Re: Why doesn't libgcc define _chkstk on MinGW?
Mark Mitchell wrote:
>So, my (perhaps naive) question is: why don't we define _chkstk as an
>alias for _alloca in MinGW, so that we can link with these MSVC libraries?

It defines __chkstk as an alias for _alloca instead. My guess is that this is a mistake. There are other MSC library functions that MinGW doesn't provide, so other libraries may not link even with a _chkstk alias.

Ross Ridge
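For what it's worth, the aliasing mechanism itself is trivial. A hedged sketch using GCC's alias attribute (the names here are illustrative; MinGW's actual alias is created at the assembler level, not like this):

```c
/* Illustrative sketch only: making one symbol an alias for another with
   GCC.  These names are made up for the demonstration and are not
   MinGW's implementation. */
static int probes;

void stack_probe(void) { probes++; }

/* "chkstk_demo" resolves to the very same function as stack_probe. */
void chkstk_demo(void) __attribute__((alias("stack_probe")));

int probe_count(void) { return probes; }
```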
Re: Why doesn't libgcc define _chkstk on MinGW?
Ross Ridge wrote:
>There are other MSC library functions that MinGW doesn't provide, so
>libraries may not link even with a _chkstk alias.

Mark Mitchell wrote:
>Got a list?

Probably the most common missing symbols, using their assembler names, are:

	__ftol2
	@[EMAIL PROTECTED]
	___security_cookie

These are newer symbols in the MS CRT library and also cause problems for Visual C++ 6.0 users. I've worked around the missing security cookie symbols by providing my own stub implementation, but apparently newer versions of the Platform SDK include a library that fully implements them. I'm not sure how _ftol2 is supposed to be different from _ftol, but since I use -ffast-math anyway, I've just used the following code as a workaround:

	long _ftol2(double f) { return (long) f; }

Looking at an old copy of MSVCRT.LIB (c. 1998), other missing symbols that might be a problem include:

	T __alldiv        [I]
	T __allmul        [I]
	T __alloca_probe  [I][*]
	T __allrem        [I]
	T __allshl        [I][*]
	T __allshr        [I]
	T __aulldiv       [I]
	T __aullrem       [I]
	T __aullshr       [I]
	A __except_list   [I][*]
	T __matherr       [D]
	T __setargv       [D]
	T ___setargv      [X]
	A __tls_array     [I]
	B __tls_index     [I]
	R __tls_used      [I]
	T __wsetargv      [D]

	[D] Documented external interface
	[I] Implicitly referenced by the MSC compiler
	[X] Undocumented external interface
	[*] Missing symbols I've encountered

There are other problems related to linking that can make an MSC-compiled static library incompatible, including not processing MSC initialization and termination sections, no support for thread-local variables, and broken COMDAT section handling.

Ross Ridge
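A stub for the security-cookie symbols, in the spirit of the workaround described above, might look like this. The seed value and the no-op check are assumptions for the sketch; a real __security_check_cookie terminates the process on a mismatch, and newer Platform SDKs ship a proper implementation:

```c
#include <stdint.h>

/* Hypothetical stand-ins for the missing MSVC security-cookie symbols.
   The initial value here is arbitrary for this sketch. */
uintptr_t __security_cookie = 0xBB40E64EUL;

void __security_check_cookie(uintptr_t cookie)
{
    /* A real checker aborts if cookie != __security_cookie;
       this stub simply accepts anything. */
    (void)cookie;
}

/* The _ftol2 truncation stub from above, repeated for completeness. */
long _ftol2(double f) { return (long)f; }
```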
Re: Threading the compiler
Mike Stump writes:
>We're going to have to think seriously about threading the compiler. Intel
>predicts 80 cores in the near future (5 years). [...] To use this many
>cores for a single compile, we have to find ways to split the work. The
>best way, of course is to have make -j80 do that for us, this usually
>results in excellent efficiencies and an ability to use as many cores
>as there are jobs to run.

Umm... those 80 processors that Intel is talking about are more like the 8 coprocessors in the Cell CPU. It's not going to give you an 80-way SMP machine that you can just "make -j80" on. If that's really your target architecture you're going to have to come up with some really innovative techniques to take advantage of it in GCC. I don't think working on parallelizing GCC for 4- and 8-way SMP systems is going to give you much of a head start. Which isn't to say it wouldn't be a worthy enough project in its own right.

Ross Ridge
Re: strict aliasing question
Howard Chu wrote:
> extern void getit( void **arg );
>
> main() {
>	union {
>		int *foo;
>		void *bar;
>	} u;
>
>	getit( &u.bar );
>	printf("foo: %x\n", *u.foo);
> }

Rask Ingemann Lambertsen wrote:
> As far as I know, memcpy() is the answer:

You don't need a union or memcpy() to convert the pointer types. You can solve the "void **" aliasing problem with just a cast:

	void *p;
	getit(&p);
	printf("%d\n", *(int *)p);

This assumes that getit() actually writes to an "int" object and returns a "void *" pointer to that object. If it doesn't, then you have another aliasing problem to worry about. If it writes to the object using some other known type, then you need two casts to make it safe:

	void *p;
	getit(&p);
	printf("%d\n", (int)*(long *)p);

If it writes to the object using an unknown type, then you might be able to use memcpy() to get around the aliasing problem, but this assumes you know that the two types are compatible at the bit level:

	void *p;
	int n;
	getit(&p);
	memcpy(&n, p, sizeof n);
	printf("%d\n", n);

The best solution would be to fix the interface so that it returns the pointer types it actually uses. This would make it typesafe and you wouldn't need to use any casts. If you can't fix the interface itself, the next best thing would be to create your own wrappers which put all the nasty casts in one place:

	int sasl_getprop_str(sasl_conn_t *conn, int prop, char const **pvalue)
	{
		assert(prop == SASL_AUTHUSER || prop == SASL_APPNAME || ...);
		void *tmp;
		int r = sasl_getprop(conn, prop, &tmp);
		if (r == SASL_OK)
			*pvalue = (char const *) tmp;
		return r;
	}

Unfortunately, there are aliasing problems in the Cyrus SASL source that can still come around and bite you once LTO arrives, no matter what you do in your own code. You might want to see if you can't get them to change undefined code like this:

	*(unsigned **)pvalue = &conn->oparams.maxoutbuf;

into code like this:

	*pvalue = (void *) &conn->oparams.maxoutbuf;

Ross Ridge
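To make the difference concrete, here's a small self-contained version of the cast and memcpy() approaches. The getit() below is a hypothetical stand-in that really does write to an "int" object, as the cast approach assumes:

```c
#include <string.h>

static int stored;

/* Hypothetical stand-in for the opaque interface: it writes an "int"
   object and hands back a "void *" pointing at it. */
static void getit(void **arg)
{
    stored = 42;
    *arg = &stored;
}

/* The cast-only approach: no union, no memcpy(). */
int read_via_cast(void)
{
    void *p;
    getit(&p);
    return *(int *)p;
}

/* The memcpy() approach, for when the pointee is only known to be
   bit-compatible with int. */
int read_via_memcpy(void)
{
    void *p;
    int n;
    getit(&p);
    memcpy(&n, p, sizeof n);
    return n;
}
```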
Re: Threading the compiler
Ross Ridge wrote:
>Umm... those 80 processors that Intel is talking about are more like the
>8 coprocessors in the Cell CPU.

Michael Eager wrote:
>No, the Cell is asymmetrical (vintage 2000) architecture.

The Cell CPU as a whole is asymmetrical, but I'm only comparing the design to the 8 identical coprocessors (of which only 7 are enabled in the CPU used in the PlayStation 3).

>Intel & AMD have announced that they are developing large multi-core
>symmetric processors. The timelines I've seen say that the number of
>cores on each chip will double every year or two.

This doesn't change the fact that SMP systems don't scale well past 16 processors or so. To go beyond that you need a different design. Clustering and NUMA have been ways of solving the problem outside the chip. Intel's plan for solving it inside the chip involves giving each of the 80 cores its own 32 MB of SRAM and only connecting each core to its immediate neighbours. This is similar to the Cell SPEs: each has 256K of local memory, and they're all connected together in a ring.

> Moore's law hasn't stopped.

While Moore's Law may still be holding on, bus and memory speeds aren't doubling every two years. You can't design an 80-core CPU like a 4-core CPU with 20 times as many cores. Having 80 processors all competing over the same bus for the same memory won't work. Neither will "make -j80". You need to do more than just divide up the work between different processes or threads. You need to divide up the program and data into chunks that will fit into each core's local memory and orchestrate everything so that the data propagates smoothly between cores.

> The number of gates per chip doubles every 18 months.

Actually, it's closer to doubling every 24 months, and Gordon Moore never said it would double every 18 months. Originally, in 1965, he said that the number of components doubled every year; in 1975, after things slowed down, he revised it to doubling every two years.

Ross Ridge
Re: bootstrap failure on HEAD
Dave Korn writes:
>Is it just me, or does anyone else get this? I objdump'd and diff'd the
>stage2 and stage3 versions of cfg.o and it seems to have developed a habit of
>inserting 'shrd'/'shld' opcodes:

It looks to me like the stage3 version with the shrd/shld is correct and it's the stage2 version that's missing opcodes. In both versions the source and destination of the shift are a 64-bit pair of registers, but the stage2 version uses 32-bit shifts, while the stage3 version uses 64-bit shifts. The code in the first chunk looks like it's the result of the expansion of the RDIV macro with the dividend being a "gcov_type" value and the divisor being 65536. It looks like "gcov_type" is 64 bits, so it should be using 64-bit arithmetic.

> although disturbingly enough there's a missing 'lea' too:

It's a NOP, probably inserted by the assembler because of an alignment directive.

Ross Ridge
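For reference, the pattern in question is along these lines. RDIV here is modeled on GCC's rounding-divide macro; treat the exact body as an assumption:

```c
#include <stdint.h>

/* Assumed shape of the rounding-divide macro discussed above. */
#define RDIV(x, y) (((x) + (y) / 2) / (y))

typedef int64_t gcov_type;

/* Dividing a 64-bit gcov_type by 65536: on i386 the 64-bit shifts this
   expands to are what the shrd/shld register-pair instructions implement. */
gcov_type scale(gcov_type count)
{
    return RDIV(count, 65536);
}
```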
Re: I need some advice for x86_64-pc-mingw32 va_list calling convention (in i386.c)
Kai Tietz writes:
>I detected, that the MS-ABI does not support SSE or MMX argument passing
>(as gcc does for x86_64). Therefore I search some advice about the
>enforcement of passing (sse,mmx) registers passing via the standard
>integer registers (for MS they are ecx,edx,r9) and via stack.

I think you may be confused here. You don't pass registers, you pass values. Microsoft's x64 ABI documents how to pass values with SSE and MMX types:

	__m128 types, arrays and strings are never passed by immediate value
	but rather a pointer will be passed to memory allocated by the caller.
	Structs/unions of size 8, 16, 32, or 64 bits and __m64 will be passed
	as if they were integers of the same size. Structs/unions other than
	these sizes will be passed as a pointer to memory allocated by the
	caller. For these aggregate types passed as a pointer (including
	__m128), the caller-allocated temporary memory will be 16-byte aligned.

Since __m128 types are the equivalent of GCC's 128-bit vector types (SSE), values of this type should be passed by reference. GCC's 64-bit vector types (MMX) should be passed by value using an integer register. This is how SSE and MMX values should be passed regardless of whether the function takes a variable number of arguments or not.

Ross Ridge
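The "as if they were integers of the same size" rule can be illustrated with a toy example. This only demonstrates the bit-pattern equivalence; it isn't the ABI machinery itself:

```c
#include <stdint.h>
#include <string.h>

/* An 8-byte struct has the same bit pattern as a 64-bit integer, which
   is why the Windows x64 ABI can pass it in a single integer register. */
struct pair { int32_t a, b; };

uint64_t pack_as_integer(struct pair p)
{
    uint64_t v;
    memcpy(&v, &p, sizeof v);
    return v;
}
```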
Re: I need some advice for x86_64-pc-mingw32 va_list calling convention (in i386.c)
Kai Tietz writes:
>But I still have a problem about va-argument-passing. The MS compiler
>reserves stack space for all may va-callable methods register arguments.

Passing arguments to functions with variable arguments isn't a special case here. According to Microsoft's documentation, you always need to allocate space for 4 arguments. The only thing different you need to do with functions taking variable arguments (and unprototyped functions) is to pass floating-point values in both the integer and floating-point registers for that argument.

Ross Ridge
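The reason floating-point varargs get mirrored into the integer registers is that the callee reads all of its variable arguments through one uniform va_arg sequence. A minimal, portable illustration of that callee-side view:

```c
#include <stdarg.h>

/* va_arg walks a single argument sequence, so a varargs callee needs
   every argument in a predictable home -- which is why the Windows x64
   ABI duplicates floating-point varargs into the integer registers. */
double sum(int n, ...)
{
    va_list ap;
    double total = 0.0;
    va_start(ap, n);
    for (int i = 0; i < n; i++)
        total += va_arg(ap, double);
    va_end(ap);
    return total;
}
```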
Re: symbol names are not created with stdcall syntax: MINGW, (GCC) 4.3.0 20061021
Danny Smith writes:
>Unless you are planning to use a gfortran dll in a VisualBasic app, I
>can see little reason to change from the default "C" calling convention

FX Coudert writes:
>That precise reason is, as far as I understand, important for some
>people. Fortran code is used for internal routines, built into shared
>libraries that are later plugged into commercial apps.

Well, perhaps things are different in Fortran, but the big problem with using -mrtd in C/C++ is that it changes the default calling convention for all functions, not just those that are meant to be exported. While most of MinGW's headers declare the calling convention of functions explicitly, not all of them do.

>How hard do you think it would be to implement a -mrtd-naming option
>(or another name) to go with -mrtd and add name decorations

It wouldn't be too hard, but I don't think it would be a good idea to implement. It would mislead people into thinking the option might be useful, and -mrtd fools enough people as it is. Adding name decorations won't make it more useful. From the examples I've seen, VisualBasic 6 has no problem calling DLL functions exported without "@n" suffixes. Any library that needs to be callable from VisualBasic 6 or some other "stdcall only" environment should explicitly declare its exported functions with the stdcall calling convention.

Ross Ridge
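An explicit declaration along these lines is what I mean. MYLIB_API and mylib_add are made-up names for the sketch, and on targets other than 32-bit Windows the attribute is simply dropped so it stays portable:

```c
/* Hypothetical export macro: stdcall only where it is meaningful. */
#if defined(_WIN32) && defined(__i386__)
#define MYLIB_API __attribute__((stdcall))
#else
#define MYLIB_API
#endif

/* A VisualBasic-callable entry point, explicitly stdcall rather than
   relying on -mrtd to change the default convention for everything. */
MYLIB_API int mylib_add(int a, int b) { return a + b; }
```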
Re: symbol names are not created with stdcall syntax: MINGW, (GCC) 4.3.0 20061021
Ross Ridge wrote:
> Any library that needs to be callable from VisualBasic 6 or some
> other "stdcall only" environment should explicitly declare its exported
> functions with the stdcall calling convention.

Tobias Burnus writes:
> Thus, if I understood you correctly, you recommend that we add, e.g.,
> pragma support to gfortran with a pragma which adds the
> __attribute__((stdcall)) to the tree?

I have no idea what would be the best way to do it in Fortran, but yes, something that would add the stdcall attribute.

Ross Ridge
Re: Building mainline and 4.2 on Debian/amd64
Joe Buck writes:
>This brings up a point: the build procedure doesn't work by default on
>Debian-like amd64 distros, because they lack 32-bit support (which is
>present on Red Hat/Fedora/SuSE/etc distros). Ideally this would be
>detected when configuring.

The Debian-like AMD64 system I'm using has 32-bit support, but the build procedure breaks anyway because it assumes 32-bit libraries are in "lib" and 64-bit libraries are in "lib64". Instead, this Debian-like AMD64 system has 32-bit libraries in "lib32" and 64-bit libraries in "lib".

Ross Ridge
Re: i386: Problems with references to import symbols.
Richard Henderson writes:
>Dunno. One could also wait to expand *__imp_foo, for functions,
>until expanding the function call. And then this variable would
>receive the address of the import library thunk.
>
>What does VC++ do?

It seems to always use *__imp_foo except when initializing a statically allocated variable in C. In that case it uses _foo, unless compiling with extensions disabled (/Za), in which case it generates a similar error to the one we do. In C++ it uses dynamic initialization, like Dave Korn suggested.

> I'm mostly wondering about what pointer equality guarantees we can make.

It looks like MSC requires that you link with the static CRT libraries if you want strict standard conformance.

Ross Ridge
Re: RFC: Enable __declspec for Linux/x86
Joe Buck writes:
>If the Windows version of GCC has to recognize __declspec to function
>as a hosted compiler on Windows, then the work already needs to be done
>to implement it.

Well, I'm kinda surprised that the Windows version of GCC recognizes __declspec. The implementation is just a simple macro, and could've just as easily been implemented in a runtime header, as the MinGW runtime does.

> So what's the harm in allowing it on other platforms?

Probably none, but since the macro can be defined on the command line with "-D__declspec(x)=__attribute__((x))", defining it by default on other platforms is only a minor convenience.

>If it makes it easier for Windows programmers to move to free compilers
>and OSes, isn't that something that should be supported?

I suppose that would argue for unconditionally defining the macro regardless of the platform.

Ross Ridge
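The macro in question is essentially a one-liner; a sketch of what a runtime header (like MinGW's) can do:

```c
/* Map MSVC's __declspec onto GCC's attribute syntax.  This is the
   simple macro referred to above, guarded so it does nothing where
   __declspec is already provided by the compiler. */
#ifndef __declspec
#define __declspec(x) __attribute__((x))
#endif

/* Example use: any GCC attribute can now be spelled the MSVC way. */
__declspec(noinline) int answer(void) { return 42; }
```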
Re: Integer overflow in operator new
Joe Buck writes:
>If a check were to be implemented, the right thing to do would be to throw
>bad_alloc (for the default new) or return 0 (for the nothrow new).

What do you do if the user has defined his own operator new that does something else?

>There are cases where the penalty for this check could have
>an impact, like for pool allocators that are otherwise very cheap.
>If so, there could be a flag to suppress the check.

Excessive code size growth could also be a problem for some programs.

Ross Ridge
Re: Integer overflow in operator new
Joe Buck writes:
>If a check were to be implemented, the right thing to do would be to throw
>bad_alloc (for the default new) or return 0 (for the nothrow new).

Ross Ridge writes:
>What do you do if the user has defined his own operator new that does
>something else?

Gabriel Dos Reis writes:
>More precisely?

Well, for example, any of the things that a new_handler can do, like throwing an exception derived from bad_alloc or calling exit(). In addition, any number of side effects are possible, like printing error messages or setting flags.

>Those programs willing to do anything to avoid imagined or perceived
>"excessive code size growth" may use the suggested switch.

The code size growth would be real, and there are enough applications out there that would consider any unnecessary growth in code excessive. The switch would be required both for that reason and for Standard conformance.

Ross Ridge
Re: Integer overflow in operator new
[EMAIL PROTECTED] (Ross Ridge) writes:
> Well, for example, like all other things that a new_handler can do,
> like throwing an exception derived from bad_alloc or calling exit().
> In addition, any number of side effects are possible, like printing
> error messages or setting flags.

Gabriel Dos Reis writes:
>I believe you're confused about the semantics.
>The issue here is that the *size of object* requested can be
>represented. That is independent of whether the machine has enough
>memory or not. So, new_handler is a red herring

The issue is what GCC should do when the calculation of the size of memory to allocate with operator new() results in unsigned wrapping. Currently, GCC's behavior is standard conforming but probably isn't the expected result. If GCC does something other than what operator new() does when there isn't enough memory available, then it will be doing something that is both non-conforming and probably not what was expected.

Ross Ridge
Re: Integer overflow in operator new
Joe Buck writes:
>Consider an implementation that, when given
>
>	Foo* array_of_foo = new Foo[n_elements];
>
>passes __compute_size(n_elements, sizeof Foo) instead of n_elements*sizeof Foo
>to operator new, where __compute_size is
>
>	inline size_t __compute_size(size_t num, size_t size) {
>		size_t product = num * size;
>		return product >= num ? product : ~size_t(0);
>	}

Yes, doing something like this instead would largely answer my concerns.

>This counts on the fact that any operator new implementation has to fail
>when asked to supply every single addressable byte, less one.

I don't know if you can assume "~size_t(0)" is equal to the number of addressable bytes, less one. A counter-example would be 16-bit 80x86 compilers, where size_t is 16 bits and an allocation of 65535 bytes can succeed, but I don't know if GCC supports any targets where something similar can happen.

>I haven't memorized the standard, but I don't believe that this
>implementation would violate it. The behavior differs only when more
>memory is requested than can be delivered.

It differs because the actual amount of memory requested is the result of the unsigned multiplication of "n_elements * sizeof Foo", using your example above. Since the result of this calculation isn't undefined, even if it "overflows", there's no room for the compiler to calculate a different value to pass to operator new().

Ross Ridge
Re: Integer overflow in operator new
Joe Buck writes:
>	inline size_t __compute_size(size_t num, size_t size) {
>		size_t product = num * size;
>		return product >= num ? product : ~size_t(0);
>	}

Florian Weimer writes:
>I don't think this check is correct. Consider num = 0x33333334 and
>size = 6. It seems that the check is difficult to perform efficiently
>unless the architecture provides unsigned multiplication with overflow
>detection, or an instruction to implement __builtin_clz.

This should work instead:

	inline size_t __compute_size(size_t num, size_t size) {
		if (num > ~size_t(0) / size)
			return ~size_t(0);
		return num * size;
	}

Ross Ridge
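A quick check of the division-based guard, using a fixed 32-bit type as a stand-in for size_t so the wrapping case is easy to exercise:

```c
#include <stdint.h>

/* The guarded computation above, specialized to uint32_t for testing. */
static uint32_t compute_size(uint32_t num, uint32_t size)
{
    if (num > UINT32_MAX / size)
        return UINT32_MAX;   /* force the allocation to fail */
    return num * size;
}
```

Note that the raw product for num = 0x33333334, size = 6 wraps to 0x33333338, which is still >= num; that is exactly why a "product >= num" test misses this case while the division test catches it.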
Re: Integer overflow in operator new
Florian Weimer writes:
>Yeah, but that division is fairly expensive if it can't be performed
>at compile time. OTOH, if __compute_size is inlined in all places,
>code size does increase somewhat.

Well, I believe the assumption was that __compute_size would be inlined. If you want to minimize code size and avoid the division, then a library function something like the following might work:

	void *__allocate_array(size_t num, size_t size, size_t max_num)
	{
		if (num > max_num)
			size = ~size_t(0);
		else
			size *= num;
		return operator new[](size);
	}

GCC would calculate the constant "~size_t(0) / size" and pass it as the third argument. You'd be trading a multiply for a couple of constant outgoing arguments, so the code growth should be small. Unfortunately, you'd also be trading what in most cases is a fast shift and maybe an add or two for a slower multiply.

So long as whatever switch is used to enable this check isn't on by default and its effect on code size and speed is documented, I don't think it matters that much what those effects are. Anything that works should make the people concerned about security happy. People more concerned with size or speed aren't going to enable this feature.

Ross Ridge
Re: PING [4.1 regression, patch] build i686-pc-mingw32
>A thought occurs to me... we *know* how to build build-system
>executables, like gen*.exe. Why can't we have small C programs that
>know where as/ld are, know how to exec them portably (libiberty), etc?

You already have a not-so-small C program that's supposed to know where as and ld are. Unfortunately, it seems to get this wrong in some case or another, and thus these rules for "linking" the utilities into the build directory were added. Maybe it's the gcc front end that needed to be fixed, not the makefile.

Ross Ridge
-- 
 l/  //	  Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: PING [4.1 regression, patch] build i686-pc-mingw32
Ross Ridge wrote:
> You already have a not-so-small C program that's supposed to know
> where as and ld are.

DJ Delorie wrote:
> You're forgetting about configure.

I don't see how the existence of configure changes the fact that the GCC compiler driver exists, is capable of running as and ld, and is supposed to know where they are. It even does PATH-like searches. Why not just fix it so it runs the correct version of as and ld directly during the bootstrap process? Add a "--use-bootstrap-binutils" flag or a "--with-ld=" flag to the compiler driver; that way the newly built compiler driver can directly execute the version of ld and as that the current makefile hack is trying to force it into running.

Ross Ridge
-- 
 l/  //	  Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: PING [4.1 regression, patch] build i686-pc-mingw32
Ross Ridge wrote:
> I don't see how the existence of configure changes the fact that the GCC
> compiler driver exists,

DJ Delorie wrote:
> At the time you're running configure, the gcc driver does *not* exist,
> but you *do* need to run as and ld to test what features they support,
> information which is needed in order to *build* gcc.

I don't see the relevance to the problem at hand. The Makefile that contains the current hack that's causing the problem doesn't exist at configure time either. If your proposed solution of creating new programs to execute as and ld were implemented, then these new programs would also not exist at configure time. It's not a problem in configure that's causing the bootstrap failure, it's a bug in the Makefile.

Ross Ridge
Re: PING [4.1 regression, patch] build i686-pc-mingw32
Ross Ridge wrote:
>I don't see how the existence of configure changes the fact that the GCC
>compiler driver exists,

DJ Delorie wrote:
>At the time you're running configure, the gcc driver does *not* exist,
>but you *do* need to run as and ld to test what features they support,
>information which is needed in order to *build* gcc.

Ross Ridge wrote:
> I don't see the relevance to the problem at hand. The Makefile that
> contains the current hack that's causing the problem doesn't exist at
> configure time either.

Mark Mitchell wrote:
> Right. The real solution is to separate libgcc from the rest of the
> compiler; you should be able to (a) use configure to detect features of
> your as/ld, (b) build the compiler, (c) install it, and, only then, (d)
> start building libraries.

Sorry, but I don't see the relevance of this either. Are you and DJ Delorie trying to address some problem other than the fact that GCC doesn't bootstrap on MinGW?

Ross Ridge
Re: [patch] Fix i386-mingw32 build failure
Christopher Faylor wrote:
>The consensus seemed to be that this was not the way to fix the problem.

The consensus also seemed to be that it was just an aspect of a larger problem that no good solution had yet been proposed to solve.

>I suggested that modifying pex-* functions to understand #! scripts
>might be the best way to deal with this.

It seems a rather convoluted and complicated way of tricking the newly built compiler driver into running a specific version of a program, especially since the #! hack currently used in the Makefile might get replaced by some other hack.

Ross Ridge
Re: [patch] Fix i386-mingw32 build failure
DJ Delorie wrote:
>Supporting #! in libiberty doesn't have to stop you from enhancing gcc
>anyway ;-)

Well, it's stopping a real fix for the MinGW build failure from being made. Adding #! support to libiberty won't work because the problem scripts have MSYS/Cygwin paths for the shell (eg. "/bin/sh") that aren't likely to be valid on plain Windows.

Ross Ridge
Re: [patch] Fix i386-mingw32 build failure
> DJGPP solves this thusly:
>
> /* If INTERP is a Unix-style pathname, like "/bin/sh", we will try
>    it with the usual extensions and, if that fails, will further
>    search for the basename of the shell along the PATH; this
>    allows to run Unix shell scripts without editing their first line. */

It's likely to work in this case, but it doesn't guarantee that the shell that gets executed uses the same MSYS/Cygwin environment as the rest of the build process.

Ross Ridge
-- 
 l/  //	  Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: [patch] Fix i386-mingw32 build failure
> I am nearly ready to commit this patch but I went overboard and had it
> search in mingw and MSYS locations for the program to run (i.e.,
> "/bin/sh").

Since there's no MinGW shell, there's no point in looking there.

> Then it occurred to me that maybe this was a little too
> "product specific" for something like libiberty.
>
> Or should I yank out that code and just search for "sh" along the path?

If a PATH search doesn't turn up the shell, then a desperate search through the registry isn't likely to turn up the right shell. While in other contexts finding any shell might be better than nothing, in this context, fixing the MinGW build failure, the shell needs to come from the same "Unix emulation" environment that is running the build process.

Ross Ridge
-- 
 l/  //	  Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: [DEAD] APPEAL to steering committee: [Bug target/23605]memset() Optimization on x86-32 bit
Daniel Berlin wrote:
> There is no guarantee that your bug will or won't be fixed for a
> certain release, etc, unless *you* start submitting the patches to
> fix it.

Actually, there's no guarantee that, even if you submit patches to fix a bug, it will be fixed in any official release.

Ross Ridge
Re: Any plan to support Windows/x86-64?
> Is there any plan to support Windows/x86-64?

I haven't heard of anyone wanting to work on such a port.

> What are needed for the port?

What you'd need for any OS port: GCC needs to support the Windows x64 ABI, you need a suitable runtime library, and you need a suitable assembler and linker. I'm not sure how close the Windows x64 ABI is to the x86-64 ABI, but it seems to be a fair bit different. Porting MinGW should be the simplest way to get a suitable runtime library, though maybe you'd want to use a newer version of the Microsoft C runtime library. Using binutils as the assembler and linker is pretty much a given, but they'd need to be ported to support the Windows x64 PE32+ PECOFF format, if they don't already.

Ross Ridge
Re: proposed Opengroup action for c99 command (XCU ERN 76)
Paul Eggert wrote:
>So my question is: Is it a burden on GCC to require interpretation (B)?
>
>My understanding is that GCC already uses (B), and that the answer is
>"no, it's no problem", but if I'm wrong please let me know.

GCC doesn't use (A), (B) or (C). GCC doesn't conform to C99, and any implementation of "c99" that uses GCC would presumably also be non-conforming.

Ross Ridge
Re: proposed Opengroup action for c99 command (XCU ERN 76)
Ross Ridge wrote:
> GCC doesn't use (A), (B) or (C). GCC doesn't conform to C99 and
> any implementation of "c99" that uses GCC would presumably also be
> non-conforming.

Robert Dewar wrote:
> What exactly is the observable non-conformance?

GCC doesn't claim C99 conformance. The following URL lists a number of different areas in which GCC is known not to conform:

	http://gcc.gnu.org/c99status.html

Ross Ridge
-- 
 l/  //	  Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: proposed Opengroup action for c99 command (XCU ERN 76)
Ross Ridge wrote:
> GCC doesn't use (A), (B) or (C). GCC doesn't conform to C99 and
> any implementation of "c99" that uses GCC would presumably also be
> non-conforming.

Robert Dewar wrote:
> What exactly is the observable non-conformance?

Ross Ridge wrote:
> GCC doesn't claim C99 conformance. The following URL lists a number of
> different areas in which GCC is known not to conform:
>
> http://gcc.gnu.org/c99status.html

Ian Lance Taylor wrote:
> How does the fact that gcc does not currently conform to C99 imply
> that gcc doesn't use (A), (B), or (C)?

It doesn't; the implication is the other way around. But if you're asking for "observable non-conformance", there are a lot more obvious ways to observe it than by showing that GCC doesn't use (A), (B) or (C).

> In any case, the general goal is to conform to C99, so it still makes
> sense to discuss this.

Maybe, but there's no implementation of (A) or (C) in GCC that would make requiring (B) a burden. So I think Paul Eggert's question has been answered.

Ross Ridge
Re: proposed Opengroup action for c99 command (XCU ERN 76)
> I was not asking the general question, I was asking how it fails
> to conform wrt the particular technical issue at hand.

Since GCC doesn't have any code that does (A), (B), or (C), it doesn't place a burden on GCC to require it to do (B). That's sufficient to answer the technical issue at hand. While that implies GCC doesn't conform, I said so explicitly because Paul Eggert said that c99 is often implemented using GCC.

Ross Ridge
-- 
 l/  //	  Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: proposed Opengroup action for c99 command (XCU ERN 76)
> > A. Convert everything to UCNs in basic source characters as soon
> >    as possible, that is, in translation phase 1. (This is what
> >    C++ requires, apparently.)
> >
> > B. Use native encodings where possible, UCNs otherwise.
> >
> > C. Convert everything to wide characters as soon as possible
> >    using an internal encoding that encompasses the entire source
> >    character set and all UCNs.
>
> Now, see libcpp/charset.c. See the -finput-charset= option. To me
> that looks like code which does something related to (A), (B), or (C).

Well, maybe I'm missing something, but it never converts input characters to UCNs, so that means it doesn't do (A) or (B), and the only thing it converts to wide characters are wide string literals, so it doesn't do (C). Hmm... maybe it's doing (B) if UCNs aren't ever necessary, though it doesn't seem to support extended characters in identifiers, so it's not quite using native encodings where possible. Still, the intent does seem to be to go with option (B).

Ross Ridge
Re: proposed Opengroup action for c99 command (XCU ERN 76)
> You are thinking operationally, when you should think semantically.
> Remember that as-if applies here. The rules as stated give ways to
> achieve certain effects, the question is not whether we are following
> the operational rules, but whether we are following the effects.

Thinking semantically is irrelevant because the question isn't whether GCC conforms to C99 or POSIX. It clearly doesn't. GCC fails the as-if rule. The question is one of implementation burden, which can only be answered by examining GCC's implementation.

> Indeed I am not sure I understand that the three options are in fact
> distinct semantically.

They aren't in C99, as Paul Eggert's original message made clear, but they are in an environment that defines a command like "c99" that makes preprocessed output visible.

Ross Ridge
-- 
 l/  //	  Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: proposed Opengroup action for c99 command (XCU ERN 76)
Ross Ridge wrote:
> Thinking semantically is irrelevant because the question isn't whether GCC
> conforms to C99 or POSIX. It clearly doesn't. GCC fails the as-if rule.
> The question is one of implementation burden, which can only be answered
> by examining GCC's implementation.

Robert Dewar wrote:
> Once again we are not discussing general conformance here, just
> conformance on this specific issue.

Wrong. The question that was asked, and that I answered, was about implementation burden, not conformance.

Ross Ridge
-- 
 l/  //   Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: proposed Opengroup action for c99 command (XCU ERN 76)
Joe Buck writes: > To me, even a 1% performance hit to fix this would be excessive. I think any performance hit to support UCNs or extended characters outside of strings and comments is undesirable. GCC should have an option like "-trigraphs" for the few programs that need this feature. Ross Ridge
Re: proposed Opengroup action for c99 command (XCU ERN 76)
Paul Eggert writes:
> Would this weaker action pose an undue burden on GCC? My sense from
> the discussion is "no", but I'd like to double-check with the experts

I'd say "no", but I think the experts might see it as posing no burden at all on GCC. The burden would be on whoever wants to use GCC to implement the c99 command.

Ross Ridge
Re: proposed Opengroup action for c99 command (XCU ERN 76)
>The POSIXy way to do that would be to refer to the LC_CHARSET
>environment variable, but then consider
>
>LC_CHARSET=UTF-16 c99 foo.c
>
>where 'foo.c' is in UTF-16 and contains '#include ',

Not really a problem, for a number of reasons. First, it's LC_CTYPE you're thinking of. Second, the narrow character set can only be 16 bits wide if "char" is 16 bits. Thirdly, if the character set that LC_CTYPE selects isn't a superset of the POSIX portable character set, then the result is undefined. So if the included file happens to be written using characters only from the C basic character set (which is a subset of the portable character set), there isn't a problem.

Ross Ridge
Re: proposed Opengroup action for c99 command (XCU ERN 76)
> > Not really a problem, for a number of reasons. First, it's LC_CTYPE
> > you're thinking of. Second, the narrow character set can only be 16 bits
> > wide if "char" is 16 bits. Thirdly, if the character set that LC_CTYPE
> > selects isn't a superset of the POSIX portable character set, then the
> > result is undefined. So if the included file happens to be written using
> > characters only from the C basic character set (which is a subset of the
> > portable character set), there isn't a problem.
>
> My concern would be that it would be written using only characters
> "from the basic C character set", but the actual byte values that
> represent those characters might differ. For instance, if POSIX
> allows LC_CTYPE=EBCDIC (or equivalent).

If setting LC_CTYPE=EBCDIC selects a character set that is a superset of the POSIX portable character set on that POSIX system, then there isn't a problem. If it selects a character set that isn't a superset, say because the POSIX portable character set is encoded using ASCII on that system, then the effect is undefined, and so there also isn't a problem.

Ross Ridge
-- 
 l/  //   Ross Ridge -- The Great HTMU
[oo][oo]  [EMAIL PROTECTED]
-()-/()/  http://www.csclub.uwaterloo.ca/u/rridge/
 db  //
Re: Question on i386 stack adjustment optimization
>I have a question. It is OK to turn stack pointer addition into
>pop instructions with a scratch register. But I don't see how you can
>turn stack pointer subtraction into push instructions with a
>scratch register since push will change the contents of the stack,
>in addition to the stack pointer.

On i386 platforms the contents of the stack aren't changed, because the memory below where the stack pointer points isn't part of the stack, so this optimization is safe. However, on x86_64 ABI platforms there's a 128-byte "red zone" below where the stack pointer points that is considered part of the stack, and since GCC uses that red zone to "push" values on the stack, this optimization isn't safe there.

Ross Ridge
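To make the transformation concrete, here is a sketch of what the optimization does (register choice and exact sequence are illustrative, not GCC's actual output):

```asm
# Replacing a stack-pointer adjustment with pops, using %ecx as scratch:
#
#     addl $8, %esp      becomes      popl %ecx
#                                     popl %ecx
#
# On i386 this is safe: the two dwords loaded into the scratch register
# are junk, and memory below %esp is not considered part of the stack.
# On x86-64 the ABI defines a 128-byte red zone below %rsp that leaf
# functions may use for live data, so the mirror-image trick (turning
# "subl $8, %esp" into pushes) would overwrite live red-zone data there.
```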
Re: Add crc32 function to libiberty
DJ Delorie writes:
>I didn't reference the web site for the polynomial, just for background.
>To be honest, I'm not sure what the polynomial is. As the comments
>explain, the algorithm I used is precisely taken from gdb, in remote.c,
>and is intended to produce the same result. Does anybody on the gdb
>side know the polynomial or any other information?

Your code uses the (one and only) CRC-32 polynomial 0x04C11DB7, so just describing it as the "CRC-32" function should be sufficient documentation. It's the same CRC function as used by PKZIP, Ethernet, and cksum. It's not compatible with the Intel CRC32 instruction, which uses the CRC-32C polynomial (0x1EDC6F41).

Ross Ridge
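For reference, a minimal sketch of that CRC-32: the bit-reflected form of polynomial 0x04C11DB7 (table constant 0xEDB88320), matching PKZIP and friends. The function name and signature are illustrative, not libiberty's actual interface:

```c
#include <stddef.h>
#include <stdint.h>

/* Bitwise CRC-32 (IEEE 802.3 / PKZIP polynomial 0x04C11DB7, reflected).
   Pass 0 as the initial crc; the pre- and post-inversion that the
   standard CRC-32 requires is handled inside. */
uint32_t crc32_sketch(uint32_t crc, const unsigned char *buf, size_t len)
{
    crc = ~crc;
    while (len--) {
        crc ^= *buf++;
        for (int i = 0; i < 8; i++)
            /* Conditionally XOR the reflected polynomial, one bit at a time. */
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
    }
    return ~crc;
}
```

Feeding it the usual "123456789" test vector yields the standard CRC-32 check value 0xCBF43926, which is a quick way to confirm two implementations use the same polynomial.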
Re: MSVC hook function prologue
Paolo Bonzini writes:
>Are there non-Microsoft DLLs that expect to be hooked this way? If
>so, I think the patch is interesting for gcc independent of whether it
>is useful for Wine.

Stefan Dösinger writes:
>I haven't seen any so far. ...

If this patch is essentially only for one application, maybe the idea of implementing a more generally useful naked attribute would be the way to go. I implemented a naked attribute in my private sources to do something similar, although supporting hookable prologues was just a small part of its more general use in supporting an assembler-based API.

Ross Ridge
Re: CVS/SVN binutils and gcc on MacOS X?
Stefan Dösinger writes:
>Unfortunately I need support for the swap suffix in as, so using the system
>binaries is not an option. Is the only thing I can do to find the source of
>the as version, backport the swap suffix and hope for the best?

Another option might be a hack like this:

(define_insn "vswapmov"
  [(set (match_operand:SI 0 "register_operand" "=r")
        (match_operand:SI 1 "register_operand" "r"))
   (unspec_volatile [(const_int 0)] UNSPECV_VSWAPMOV)]
  ""
{
#ifdef HAVE_AS_IX86_SWAP
  return "movl.s\t{%1, %0|%0, %1}";
#else
  if (true_regnum (operands[0]) == DI_REG
      && true_regnum (operands[1]) == DI_REG)
    return ASM_BYTE "0x8B, 0xFF";
  if (true_regnum (operands[0]) == BP_REG
      && true_regnum (operands[1]) == SP_REG)
    return ASM_BYTE "0x8B, 0xEC";
  gcc_unreachable ();
#endif
}
  [(set_attr "length" "2")
   (set_attr "length_immediate" "0")
   (set_attr "modrm" "0")])

It's not pretty, but you won't be dependent on binutils.

Ross Ridge
Re: MSVC hook function prologue
Paolo Bonzini writes:
>The naked attribute has been proposed and bashed to death multiple
>times on the GCC list too.

No, not really. It's been proposed a few times, but the discussion never gets anywhere because the i386 maintainers quickly put their foot down and end it. That hasn't stopped other ports from implementing a naked attribute, or, for that matter, developers like me from creating their own private implementations.

Ross Ridge
Re: Add support for the Win32 hook prologue (try 3)
Stefan Dösinger writes:
>On a partly related topic, I think the Win64 ABI requires that the first
>instruction of a function is two bytes long, and that there are at least
>6 bytes of slack before the function. Does gcc implement that?

As far as I can tell the Win64 ABI doesn't have either of these requirements. Microsoft's compiler certainly doesn't guarantee that functions begin with two-byte instructions, and the "x64 Software Conventions" document gives examples of prologues with larger initial instructions: http://msdn.microsoft.com/en-us/library/tawsa7cb(VS.80).aspx Mind you, last I checked, GCC didn't actually follow the ABI requirements for prologues and epilogues given in the link above, but that only breaks ABI unwinding.

Ross Ridge
Re: dg-error vs. i18n?
Eric Blake writes:
>The correct workaround is indeed to specify a locale with specific charset
>encodings, rather than relying on plain "C" (hopefully cygwin will
>support "C.ASCII", if it does not already).

The correct fix is for GCC not to intentionally rely on implementation-defined behaviour when using the "C" locale. GCC can't portably assume any other locale exists, but it can portably and easily choose to get consistent output when using the "C" locale.

>As far as I know, the hole is intentional. But if others would like
>me to, I am willing to pursue the action of raising a defect against
>the POSIX standard, requesting that the next version of POSIX consider
>including a standardized name for a locale with guaranteed single-byte
>encoding.

I don't see how a defect in POSIX is exposed here. Nothing in the standard forced GCC to output multi-byte characters when nl_langinfo(CODESET) returns something like "utf-8". GCC could just as easily have chosen to output these quotes as single-byte characters when nl_langinfo(CODESET) returns something like "windows-1252", or as some other non-ASCII single-byte characters when it returned "iso-8859-1".

Ross Ridge
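As an illustration of the alternative being argued for, picking the quote characters from the locale's reported codeset rather than hard-coding Unicode ones, here is a sketch. The function and the codeset strings it checks are hypothetical, not GCC's actual diagnostics logic:

```c
#include <string.h>

/* Return the opening quote to use in diagnostics, given the name
   reported by nl_langinfo(CODESET).  Only a codeset known to carry
   the fancy quote gets it; anything else, including whatever the
   "C" locale reports, falls back to a plain ASCII apostrophe. */
const char *open_quote_for_codeset(const char *codeset)
{
    if (strcmp(codeset, "UTF-8") == 0)
        return "\xe2\x80\x98";   /* U+2018 LEFT SINGLE QUOTATION MARK */
    return "'";
}
```

With a whitelist like this, the "C" locale (whose codeset name is implementation-defined) would always get plain ASCII output, which is the consistency argument being made above.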
Re: dg-error vs. i18n?
Ross Ridge wrote:
> The correct fix is for GCC not to intentionally rely on
> implementation-defined behaviour when using the "C" locale. GCC can't
> portably assume any other locale exists, but it can portably and easily
> choose to get consistent output when using the "C" locale.

Joseph S. Myers writes:
>GCC is behaving properly according to the user's locale (representing
>English-language diagnostics as best it can - remember that ASCII does not
>allow good representation of English in all cases).

This is an issue of style, but as far as I'm concerned using these fancy quotes in English locales is unnecessary and unhelpful.

>The problem here is not a bug in the compiler proper, it is an issue
>with how to test the compiler portably - that is, how the testsuite can
>portably set a locale with English language and ASCII character set in
>order to test the output the compiler gives in such a locale.

It's a design flaw in GCC. The "C" locale is the only locale that GCC can use to reliably and portably get consistent output across all ASCII systems, and so it should be the locale used to achieve consistent output. GCC can simply choose to restrict its output to ASCII. It's not in any way being forced by POSIX to output non-ASCII characters, or for that matter to treat the "C" locale as an English locale.

Ross Ridge
Re: [PATCH][GIT PULL][v2.6.32] tracing/x86: Add check to detect GCC messing with mcount prologue
Andrew Haley writes:
>Alright. So, it is possible in theory for gcc to generate code that
>only uses -maccumulate-outgoing-args when it needs to realign SP.
>And, therefore, we could have a nice option for the kernel: one with
>(mostly) good code density and never generates the bizarre code
>sequence in the prologue.

The best option would be for the Linux people to fix the underlying problem in their kernel sources. If the code no longer requested that certain automatic variables be aligned, then not only would this bizarre code sequence not be emitted, the unnecessary stack alignment would disappear as well. The kernel would then be free to use whatever code generation options it felt were appropriate.

Ross Ridge
Re: Why not contribute? (to GCC)
Manuel López-Ibáñez writes:
>What reasons keep you from contributing to GCC?

The big reason is the copyright assignment. I never even bothered to read it, but as I don't get anything in return there's no point. Why should I put obligations on myself, and open myself up to even unlikely liabilities, just so my patches can be merged into the official source distribution? I work on software on my own time to solve my own problems. I'm happy enough not to "hoard" it and to give it away for "free", but it doesn't make much difference to me if anyone else actually ends up using it. I can have my own patched version of GCC that does what I want without signing anything.

Another reason is the poor patch submission process. Why e-mail a patch if I know, as a new contributor, there's a good chance it won't even be looked at by anyone? Why would I want to go through a process where I'm expected to keep begging until my patch finally gets someone's attention?

I also just don't need the abuse. GCC, while not the most hostile of open source projects out there, is up there. Manuel López-Ibáñez's unjustified hostility towards Michael Witten in this thread is just a small example.

Finally, it's also a lot of work. Just building GCC can be a pain, having to find up-to-date versions of a growing list of math libraries that don't benefit me in the slightest. Running the test suite takes a long time, so even trivial patches require a non-trivial amount of work. Anything more serious can take a huge amount of time. I've abandoned projects once I realized it would be a lot quicker to find some other solution, like using assembly, rather than trying to get GCC to do what I wanted it to do.

Now these are just the main reasons why I don't contribute to GCC. I'm not arguing that any of these issues need to be or can be fixed. If I had what I thought were good solutions that would be better overall for GCC, then I'd have suggested them long ago.
I will add that I don't think code quality is a problem with GCC. I hate the GNU coding style as much as anyone, but it's used consistently, and that's what matters. Compared to other open and closed projects I've seen, it's as easy to understand and maintain as anything. GNU binutils is a pile of poo, but I don't know of any codebase the size of GCC that's as nice to work with.

Ross Ridge
Re: Problem with x64 SEH macro implementation for ReactOS
Timo Kreuzer wrote:
>I am working on x64 SEH for ReactOS. The idea is to use .cfi_escape
>codes to mark the code positions where the try block starts / ends and
>of the except landing pad. The emitted .eh_frame section is parsed after
>linking and converted into Windows compatible unwind info / scope tables.
>This works quite well so far.

Richard Henderson writes:
>I always imagined that if someone wanted to do SEH, they'd actually
>implement it within GCC, rather than post-processing it like this.
>Surely you're making things too hard for yourself with these escape hacks

I assume he's trying to create the equivalent of the existing macros for handling Windows structured exceptions in 32-bit code. The 32-bit macros don't require any post-processing and are fairly simple. Still, even with the post-processing, Timo Kreuzer's solution would be a heck of a lot easier to implement than adding SEH support to GCC. The big problem is that, the last time I checked, GCC wasn't generating the prologues, epilogues or unwind info required by the Windows x64 ABI for functions. Windows won't be able to unwind through GCC-compiled functions whether the macros are used or not. I think the solution to the specific problem he mentioned, connecting nested functions to their try blocks, would be to emit address pairs to a special section.

Ross Ridge
Re: Problem with x64 SEH macro implementation for ReactOS
Kai Tietz writes:
>I am very interested to learn, which parts in calling convention aren't
>implemented for w64?

Well, maybe I'm missing something, but I can't see any code in GCC for generating prologues, epilogues and unwind tables in the format required by the Windows x64 ABI: http://msdn.microsoft.com/en-us/library/tawsa7cb.aspx

>I am a bit curious, as I found that the unwind mechanism of Windows
>itself working quite well on gcc compiled code, so I assumed, that the
>most important parts of its calling convention are implemented.

How exactly are you testing this? Without SEH support, Windows wouldn't ordinarily ever need to unwind through GCC-compiled code. I assumed that's why it was never implemented.

Ross Ridge
Re: Problem with x64 SEH macro implementation for ReactOS
Kai Tietz writes:
>Well, you mean the SEH tables on stack.

No, I mean the ABI-required unwind information.

> Well, those aren't implemented (as they aren't for 32-bit).

64-bit SEH handling is completely different from 32-bit SEH handling. In the 64-bit Windows ABI, exceptions are handled using unwind tables similar in concept to DWARF2 exceptions. There are no SEH tables on the stack. In the 32-bit ABI, exceptions are handled using a linked list of records on the stack, similar to SJLJ exceptions.

> But the the unwinding via RtlUnwind and RtlUnwindEx do their job even
>for gcc compiled code quite well

I don't see how that would be possible in the general case. Without the unwind tables, Windows doesn't have the information required to unwind through GCC-compiled functions.

Ross Ridge
Re: Problem with x64 SEH macro implementation for ReactOS
Kai Tietz writes:
>Hmm, yes and no. First the exception handler uses the .pdata and .xdata
>section for checking throws. But there is still the stack based exception
>mechanism as for 32-bit IIRC.

No. The mechanism is completely different. The whole point of the unwind tables is to remove the overhead of maintaining the linked list of records on the stack. It works just like DWARF2 exceptions in this respect.

>No, this isn't that curious as you mean. In the link you sent me, it is
>explained. The exception handler tables (.pdata/.xdata) are optional and
>not necessarily required.

This is what Microsoft's documentation says:

    Every function that allocates stack space, calls other functions,
    saves nonvolatile registers, or uses exception handling must have
    a prolog whose address limits are described in the unwind data
    associated with the respective function table entry.

In this very limited case RtlUnwindEx() can indeed unwind a function without it having any unwind info associated with it. If RtlUnwindEx() can't find the unwind data for a function, then it assumes that the stack pointer points directly at the return address. To unwind through the function, it pops the top of the stack to get the next frame's RIP and RSP values. Otherwise, RtlUnwindEx() needs the unwind information. The restrictions on the format of the prologue and epilogue exist only to make handling the case where the current RIP points into a prologue or epilogue much easier. Without the unwind info, RtlUnwindEx() has no way of knowing where the prologue is. There's a very detailed explanation of how Windows x64 exceptions work, including RtlUnwindEx(), on this blog: http://www.nynaeve.net/?p=113

>But in general I agree, that the generation of .pdata/.xdata sections
>would be a good thing for better support of MS abis by gcc.

I'm not advocating that they should be added to GCC now. I'm just pointing out that without them, 64-bit SEH macros will be of limited use.

Ross Ridge
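The leaf-function fallback described above can be sketched in a few lines of C. The struct and function are illustrative only, not the actual RtlUnwindEx() internals:

```c
#include <stdint.h>

/* One step of the fallback unwind used when the current RIP has no
   function table entry: assume RSP points directly at the return
   address, and "pop" it to recover the caller's RIP and RSP. */
struct frame { uint64_t rip; uint64_t rsp; };

struct frame unwind_leaf(uint64_t rsp, const uint64_t *stack_top)
{
    struct frame caller;
    caller.rip = stack_top[0];   /* [RSP] holds the return address */
    caller.rsp = rsp + 8;        /* popping a qword advances RSP by 8 */
    return caller;
}
```

Anything fancier than this, a pushed frame pointer, saved registers, a local variable area, is exactly what the .pdata/.xdata unwind opcodes exist to describe, which is why a function lacking them generally can't be unwound.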
Re: Problem with x64 SEH macro implementation for ReactOS
Timo Kreuzer writes:
>The problem of the missing Windows x64 unwind tables is already solved!
>I generate them from DWARF2 unwind info in a postprocessing step.

Ok. The Windows unwind opcodes seemed so much more limited than DWARF2's that I wouldn't have thought this approach would work.

>The problem that I have is simply how to mark the try / except / finally
>blocks in the code with reference to each other, so I can also generate
>the SCOPE_TABLE data in that post processing step

You can output address pairs to a special section to get the mapping you need. Something like:

	asm(".section .seh.data, \"n\"\n\t"
	    ".quad %0, %1\n\t"
	    ".text"
	    : : "i" (&addr1), "i" (&addr2));

Unfortunately, I don't think section stack directives work on PE-COFF targets, so you'd have to assume the function was using the .text section. btw. Don't rely on GCC putting adjacent asm statements together like you did in your original message; make them a single asm statement.

Note that the SCOPE_TABLE structure is part of Microsoft's internal private SEH implementation. I don't think it's a good idea to use or copy Microsoft's implementation. Create your own handler function and give it whatever data you need.

Ross Ridge
Re: GCC & OpenCL ?
Mark Mitchell writes:
>That's correct. I was envisioning a proper compiler that would take
>OpenCL input and generate binary output, for a particular target, just
>as with all other GCC input languages. That target might be a GPU, or
>it might be a multi-core CPU, or it might be a single-core CPU.

I have a hard time seeing why this would be all that worthwhile. Since the instruction sets for AMD, NVIDIA and current Intel GPUs are trade secrets, GCC won't be able to generate binary output for them. OpenCL is designed for heterogeneous systems; compiling for multi-core or single-core CPUs would only be useful as a cheap fallback implementation. This limits a GCC-based OpenCL implementation to achieving its primary purpose with just Cell processors and maybe Intel's Larrabee. Is that what you envision? Without AMD/NVIDIA GPU support it doesn't sound all that useful to me.

Ross Ridge
Re: GCC & OpenCL ?
Basile STARYNKEVITCH writes:
>It seems to me that some specifications
>seems to be available. I am not a GPU expert, but
>http://developer.amd.com/documentation/guides/Pages/default.aspx
>contains a R8xx Family Instruction Set Architecture document at
>http://developer.amd.com/gpu_assets/r600isa.pdf and at a very quick
>first glance (perhaps wrongly) I feel that it could be enough to design &
>write a code generator for it.

Oh, ok, that makes a world of difference. Even with just AMD GPU support a GCC-based OpenCL implementation becomes a lot more practical.

Ross Ridge
Re: GCC & OpenCL ?
Ross Ridge wrote:
> Oh, ok, that makes a world of difference. Even with just AMD GPU
> support a GCC-based OpenCL implementation becomes a lot more practical.

Michael Meissner writes:
>And bear in mind that x86's with GPUs are not the only platform of interest

I never said anything about x86's, and I already mentioned the Cell. Regardless, I don't think a GCC-based OpenCL implementation that didn't target GPUs would be that useful.

Ross Ridge
Re: Ideas for Google Summer of Code
Paolo Bonzini writes:
>Regarding the NVIDIA GPU backend, I think NVIDIA is not yet distributing
>details about the instruction set unlike ATI, is it? In this case, I
>think ATI would be a better match.

I think a GPU backend would be well beyond the scope of a Summer of Code project. GPUs don't have normal stacks, and addressing support is limited.

>Another possibility is to analyze OpenCL C and try to integrate its
>features in GCC as much as possible. This would include
>
>1) masking/swizzling support for GCC's "generic vector" language extension;

A project that started and ended here would give GCC, and in particular GCC's Cell SPU port, the only major functionality required by the OpenCL language, outside the runtime, that GCC is missing.

>2) half-precision floats;

Do you mean conversion-only support, like Sandra Loosemore's proposed ARM patch, or full arithmetic support like any other scalar or vector type?

Ross Ridge
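To illustrate what item (1) refers to: with GCC's generic vector extension, a swizzle currently has to be spelled out element by element, where OpenCL C would let you write something like v.wzyx. The helper below is hypothetical, shown only to make the gap concrete:

```c
/* GCC generic vector type: four floats packed into 16 bytes. */
typedef float v4sf __attribute__((vector_size(16)));

/* The OpenCL-style .wzyx swizzle, written out by hand using the
   per-element subscripting the vector extension already supports. */
v4sf swizzle_wzyx(v4sf v)
{
    v4sf r = { v[3], v[2], v[1], v[0] };
    return r;
}
```

OpenCL-style masking/swizzling support would let the compiler express this as a single shuffle rather than four scalar extracts and a rebuild.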
Re: Ideas for Google Summer of Code
Joe Buck writes:
>I'm having trouble finding that document, I don't see a link to it
>on that page. Maybe I'm missing something obvious?

Sticking "nvidia ptx" into Google turned up this document: http://www.nvidia.com/object/io_1195170102263.html It's an intermediate language, so it isn't tied to any particular NVIDIA GPU. I believe there's something similar for AMD/ATI GPUs. btw. The computational power of Intel's integrated GPUs is pretty dismal, so I don't think a GCC port targeting them would be very useful.

Ross Ridge
Re: Why not contribute? (to GCC)
Alfred M. Szmidt writes:
>You are still open to liabilities for your own project, if you
>incorporate code that you do not have copyright over, the original
>copyright holder can still sue you

That's irrelevant. By signing the FSF's document I'd be doing nothing to reduce anyone's ability to sue me; I could only be increasing it. And please don't try to argue that's not true, because I have no reason to believe you. Only a lawyer working for me would be in a position to convince me otherwise, but if I have to go that far, it's clearly not worth it.

The debate over legalities has already derailed this thread, so let me try to put it another way. Years ago, I was asked to sign one of these documents for some public domain code I wrote that I never intended to become part of an FSF project. Someone wanted to turn it into a regular GNU project with a GPL license, configure scripts, a cute acronym and all that stuff. I said no. It's public domain, take it or leave it. Why should I sign some legally binding document for some code I had in effect already donated to the public? How would you feel if some charity you donated money to came back with a piece of paper for you to sign? Submitting a hypothetical patch to GCC isn't much different to me. For some people, having their code in the GCC distribution is worth something. For me it's not. For them it's a fair trade. For me it's a donation.

>We are all humans, patches fall through the cracks. Would you like to
>help keeping an eye out for patches that have fallen through? Would
>anyone else like to do this?

As I said, I was just listing the reasons why I don't contribute. I'm not arguing that anything should be changed or can be changed. However, what I do know is that excuses won't make me or anyone else more likely to contribute to GCC.

>Please refer to GCC as a free software project, it was written by the
>GNU project and the free software community.

Oh, yeah, forgot about that one.
Political stuff like this another reason not to get involved with GCC. Ross Ridge
Re: Why not contribute? (to GCC)
Ross Ridge writes:
> Years ago, I was asked to sign one of these documents for some public
> domain code I wrote that I never intended to become part of an FSF project.
> Someone wanted to turn it into a regular GNU project with a GPL license,
> configure scripts, a cute acronym and all that stuff. I said no.
> It's public domain, take it or leave it. Why should I sign some
> legally binding document for some code I had in effect already donated
> to the public?

Richard Kenner writes:
> Because that's the only way to PUT something in the public domain!

That's absurd and beside the point.

>> How would you feel if some charity you donated money to came back
>> with a piece of paper for you to sign?
>
>A closer analogy: a charity receives an unsolicited script for a play from
>you.

No, that's not a closer analogy. As I said, I never intended for my code to become part of an FSF project. I didn't send them anything unsolicited. I'm contributing to this thread solely to answer the question asked. Either take the time to read what I've written and use it to try to understand why I don't, and others might not, contribute to GCC, or please just ignore it. Your unsubstantiated and irrelevant legal opinions aren't helping.

Ross Ridge
Re: Inconsistency in ix86_binary_operator_ok?
>My confusion is that these functions currently allow arithmetic
>operations of the form "reg = op(mem,immed)" even though this
>shape isn't supported by the x86 ISA.

The IMUL instruction can have this form.

Ross Ridge
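The point can be demonstrated with the three-operand form of IMUL, which multiplies a memory operand by an immediate straight into a register. This sketch is illustrative; the inline asm path is x86-only, with a plain C multiply standing in elsewhere:

```c
/* reg = op(mem, immed): the three-operand IMUL (imull $imm, mem, reg)
   is the x86 arithmetic instruction with exactly this shape. */
int imul_mem_by_10(const int *p)
{
    int r;
#if defined(__i386__) || defined(__x86_64__)
    /* Multiply the memory operand *p by the immediate 10 into r. */
    __asm__ ("imull $10, %1, %0" : "=r" (r) : "m" (*p));
#else
    r = *p * 10;   /* non-x86 fallback so the sketch stays portable */
#endif
    return r;
}
```

Other x86 arithmetic instructions (ADD, SUB, AND, ...) only offer mem = op(mem, imm) or reg = op(reg, imm) forms, which is presumably why the predicate looks inconsistent at first glance.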
Re: Problem with pex-win32.c
Mark Mitchell wrote:
>The new pex-win32.c code doesn't operate correctly when used for
>MinGW-hosted tools invoked from a Cygwin window. In particular, process
>creation ("gcc" invoking "as", say) results in a DOS console window
>popping up. When invoked from a DOS window, things are fine.

What sort of "Cygwin window" are you talking about? Cygwin bash running in a normal Windows console window, or an rxvt or xterm window?

>The reason for this problem is that the current code uses MSVCRT's spawn
>to create the process, and that function doesn't provide fine enough
>control. You have to use CreateProcess to make this work as desired.
>When invoking CreateProcess, you must set bInheritHandle to TRUE and
>pass along a STARTUPINFO structure with dwFlags set to
>STARTF_USESTDHANDLES, and the various hStd* handle fields set to the
>values from the calling process.

Setting bInheritHandle to TRUE causes the newly created process to inherit all of its parent process's inheritable handles. There's no point in using the STARTUPINFO structure to specify handles for stdin, stdout, and/or stderr unless you're changing them. Since MSVCRT's spawn sets bInheritHandle to TRUE, I don't see what difference this would make.

I think you're barking up the wrong tree here. Windows only creates console windows automagically when a console application starts that can't inherit its parent's console window. Either gcc is somehow losing its console window or it never had one to begin with. Hmm... if that "#!" kludge is being used here, then it could be the shell that's losing the console.

Ross Ridge
Re: Problem with pex-win32.c
Ross Ridge wrote:
> I think you're barking up the wrong tree here. Windows only creates
> console windows automagically when a console application starts that
> can't inherit its parent's console window.

Mark Mitchell wrote:
> Exactly -- there is no parent console window here.

Why isn't there a console window? The Cygwin version of rxvt, and I think xterm, creates (and keeps hidden) a console window that should be inherited by gcc and as. If there wasn't a console window for gcc to inherit, why didn't Windows create one for gcc? Maybe gcc was linked with -mwindows. That would cause gcc not to have a console window, neither an inherited one nor an automagically created one, and so one would have to be created for the invoked tools.

> Hence, we need the child (which is a console application) created
> without a console window, but with its standard input/output connected
> to its parent.

More precisely, the child should inherit its standard input and output handles from its parent.

>Empirically, if you do not set CREATE_NO_WINDOW in dwCreationFlags, a
>console appears, which is undesirable.

Using CREATE_NO_WINDOW really isn't an option here, because it's not supported by Windows 9x/ME.

> However, if you do set CREATE_NO_WINDOW, but do not set the standard
>handles in the STARTUPINFO structure, the child's standard output/error do
>not appear anywhere; perhaps they are closed because there is no console.

With CREATE_NO_WINDOW the new process has no attached console. Like a -mwindows application, it neither inherits one, nor is one created for it. Since a process can't use a console other than the one attached to it, there's nowhere for the output to go.

Ross Ridge
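For concreteness, here is a sketch of the CreateProcess call being debated, combining CREATE_NO_WINDOW with STARTF_USESTDHANDLES. It is Windows-only by nature (the non-Windows stub just keeps the sketch compilable), error handling is minimal, and the function name is illustrative:

```c
#include <string.h>

#ifdef _WIN32
#include <windows.h>

/* Launch a console child without popping up a console window, while
   handing it our own standard handles.  Returns 0 on success. */
int run_child_quietly(const char *cmd)
{
    char cmdline[1024];             /* CreateProcess may modify its lpCommandLine */
    STARTUPINFO si;
    PROCESS_INFORMATION pi;

    strncpy(cmdline, cmd, sizeof cmdline - 1);
    cmdline[sizeof cmdline - 1] = '\0';

    ZeroMemory(&si, sizeof si);
    si.cb = sizeof si;
    si.dwFlags    = STARTF_USESTDHANDLES;   /* pass our std handles on */
    si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);
    si.hStdOutput = GetStdHandle(STD_OUTPUT_HANDLE);
    si.hStdError  = GetStdHandle(STD_ERROR_HANDLE);

    if (!CreateProcess(NULL, cmdline, NULL, NULL,
                       TRUE,                /* bInheritHandles */
                       CREATE_NO_WINDOW,    /* NT-only, per the thread */
                       NULL, NULL, &si, &pi))
        return -1;

    WaitForSingleObject(pi.hProcess, INFINITE);
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return 0;
}
#else
/* Non-Windows stub so the sketch stays compilable everywhere. */
int run_child_quietly(const char *cmd) { (void)cmd; return 0; }
#endif
```

As the thread notes, with CREATE_NO_WINDOW the child has no console at all, so explicitly passing the standard handles is what keeps its output visible.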
Re: Problem with pex-win32.c
Mark Mitchell wrote:
>Cygwin Xterm
>===
>parent spawn: Pops up DOS window.
>parent nostd: No output from child.
>parent std: Works.
>
>DOS Console
>===
>parent spawn: Works.
>parent nostd: No output from child.
>parent std: No output from child.

This is what I got using your code and Cygwin rxvt:

Cygwin rxvt
===
parent spawn: Works.
parent nostd: No output from child.
parent std: Works.

I wasn't able to test it with xterm, as I don't have an X server handy, but it looks like your problem is with xterm, not gcc.

Ross Ridge
Re: Problem with pex-win32.c
>Here is a sample program which does the right thing (no spurious console
>windows, all output visible) when run either from a console or from a
>console-free environment, such as a Cygwin xterm. This is the code
>we'll be working into libiberty -- unless someone has a better solution!

The potential problem I see is in all the code you haven't written yet. There's a fair amount of work that the MSVCRT library does that isn't reflected in your code, like PATH searching, adding filename suffixes (eg. ".EXE"), and converting the argv array into a command tail. It even uses undocumented parameters to CreateProcess() to pass file descriptor flags to the new process. While you can probably ignore the latter behaviour, subtle differences in the other behaviours could cause problems. Is this really worth it? Could this whole problem be solved by you switching to rxvt? Maybe the only problem is that your xterm is broken.

Ross Ridge
Re: Problem with pex-win32.c
Dave Korn writes:
>I don't understand why you think Mark's code needs to search the PATH or
>append '.exe', when it invokes CreateProcess that does all that for you?

I've already answered that question: "subtle differences in the other behaviours could cause problems." The search behaviour and extension handling of CreateProcess() is actually quite a bit different from that of MSVCRT's spawn functions. Also, because of the way he uses CreateProcess(), Mark's code as it is now won't search the PATH.

>> Is this really worth it? Could this whole problem be solved by you
>> switching to rxvt? Maybe the only problem is that your xterm is broken.
>
> Nothing is "broken". The problem is that Cygwin applications run in
>a slightly special environment, where there may not be a console attached
>to the shell window.

Arguably, not having a console window attached to a shell window is broken in the Cygwin environment.

>This is not a problem for cygwin apps, but it can be for non-cygwin-aware
>apps launched from inside cygwin's 'special' environment that may assume
>that the standard win32 assumptions hold.

So, in general, you can't expect any Win32 console application to work correctly in such an environment. Why should Mark expect a Win32 console version of gcc to be any different? Hmm... maybe that's the best solution: Mark should be using a "native" Cygwin version of gcc and tools.

Ross Ridge
Re: Problem with pex-win32.c
Ross Ridge wrote:
> Arguably, not having a console window attached to a shell window is broken in the Cygwin environment.

Paul Brook wrote:
> How exactly do you suggest implementing this?

The same way Cygwin rxvt implements this.

>By implication you're saying that you shouldn't be able to use gcc from any GUI environment. Cygwin isn't any different to any other process (eg. Eclipse) that wants to run and capture the output of "commandline" applications.

No, the implication is that I shouldn't expect to use gcc in any GUI environment that doesn't create a hidden console window to run console applications with. Fortunately for me, my favourite GUI development environment, GNU Emacs, does just that on Windows. I just tested MinGW gcc with Visual Studio 2005 Express, and it gets it right too. No console windows popping up, no output lost. I don't know about Eclipse, but other IDEs and kitchen-sink editors, like Code::Blocks, Dev-C++ and gvim, seem to work fine for other people. If Eclipse doesn't work as well with MinGW and every other Win32 command line compiler or tool, then arguably Eclipse is broken too.

Ross Ridge
Re: FSF Policy re. inclusion of source code from other projects in GCC
Richard Guenther wrote:

> /*
>  * Copyright (C) 1993 by Sun Microsystems, Inc. All rights reserved.
>  *
>  * Developed at SunPro, a Sun Microsystems, Inc. business.
>  * Permission to use, copy, modify, and distribute this
>  * software is freely granted, provided that this notice
>  * is preserved.
>  */

Mark Mitchell wrote:
>My guess is that it's OK to include the Sun code, since it's in the public domain.

This may just be nit-picking, but the above notice doesn't put the code into the public domain. Sun still owns the copyright on the software.

Ross Ridge
Re: Toolchain relocation
Dave Murphy wrote: > install: e:/devkitPro/devkitARM/lib/gcc/arm-elf/4.1.0/ Don't use a --prefix with a drive letter. Just use --prefix=/devkitARM, and then use "make install DESTDIR=e:/devkitPro" to install it where you actually want it. Ross Ridge
Re: Crossed-Native Builds, Toolchain Relocation and MinGW
Dave Murphy wrote:
>When a mingw gcc toolchain is built on a linux host then it cannot find its headers or its associated tools when running from a cmd shell on the windows host. This can be worked around by using the msys shell to provide a mount point identical to the configured prefix, but isn't ideal.

MinGW GCC is a native Win32 application and is unaffected by any mounts you create with MSYS.

Ross Ridge
Re: Crossed-Native Builds, Toolchain Relocation and MinGW
Ross Ridge wrote: >MinGW GCC is a native Win32 application and is unaffected by any mounts >you create with MSYS. Dave Murphy wrote: >It's affected when you run from the msys bash shell, my apologies for >not being clear. That makes no difference. MinGW GCC is a native Win32 application and can't see any mounts you create with MSYS. Ross Ridge
Re: Crossed-Native Builds, Toolchain Relocation and MinGW
Ross Ridge wrote: >That makes no difference. MinGW GCC is a native Win32 application and >can't see any mounts you create with MSYS. Dave Murphy wrote: >sorry but you're most definitely wrong about that No, I'm not. The example you gave shows how MSYS mounts have an effect on the MSYS shell, which is not a native Win32 application. The "toolchain relocation" code in MinGW GCC is unaffected by MSYS mounts you might create, and so providing "a mount point identical to the configured prefix" won't have any effect. Ross Ridge
Re: Crossed-Native Builds, Toolchain Relocation and MinGW
Ranjit Mathew wrote: >In the problematic case, GCC is able to find "cc1" but not "as" ... ... >Note also that GCC's programme search path does not include >its own location for some reason: ... >d:/MiscAppz/MinGW/lib/gcc/i686-pc-mingw32/4.1.0/../../../../i686-pc-mingw32/bin/ This is the directory ("i686-pc-mingw32/bin") where you should install the version of as.exe and ld.exe you want that installation of gcc to use. Ross Ridge
Re: Segment registers support for i386
Remy Saissy wrote:
>I don't really understand how to access the attribute set up in the tree in the rtx struct.

You won't be able to. You're going to need to write your own code that, during the conversion of the tree to RTL, creates RTL expressions which indicate that the memory references use segment registers. This probably won't be easy since there are a lot of contexts where your "far" pointer can be used. I suspect this is where you're going to give up on your project, but if you don't, the RTL expressions you'll need to create should probably look like:

	(mem:SI (plus:SI (unspec:SI [(reg:HI fs)] SEGREF) (reg:SI var)))

After getting GCC to generate expressions like these, it's a relatively simple matter of modifying ix86_decompose_address() to handle the unspec. You might also need to change other backend code that handles addresses.

Ross Ridge
Re: Segment registers support for i386
Remy Saissy wrote:
>if I understand well, to make gcc generate rtx according to an __attribute__((far("fs"))) on a pointer I only have to add or modify rtx in the i386.md file and add an UNSPEC among the constants ?

No. The work you need to do on the backend, adding an UNSPEC constant to "i386.md" and writing code to handle the UNSPEC in "i386.c", is just the easy part.

> What I understand is that there are two kinds of management for attributes :

Attributes are handled in various different ways depending on what the attribute does. To handle your case correctly, you'd have to change how the tree to RTL conversion generates RTL address expressions whenever a pointer with the "far" attribute is dereferenced. This is probably going to be a lot of work.

> Therefore, I can consider the following relationship:
>
>   (mem:SI (plus:SI (unspec:SI [(reg:HI fs)] SEGREF) (reg:SI var)))
>                        ||
>                        \/
>   int * __attribute__((far("fs"))) p;

No, that's not what the RTL expression represents. Declarations aren't represented in RTL. The example RTL expression I gave is just an expression, not a full RTL instruction. It's something that could be used as the memory operand of an instruction. The RTL expression I gave would correspond to a C expression (not a statement) like this:

	*(int * __attribute__((far("fs")))) var

> does (reg:HI fs) care about the type of the parameter fs ?

See the GCC Internals documentation. In my example, since I don't know what actual hard register number you assigned to the FS segment register, I just put "fs" in the place where the actual register number would appear. Similarly, the "var" in "(reg:SI var)" represents the number of the pseudo-register GCC would allocate for an automatic variable named "var".

> how does gcc recognize such an expression ?

Since this expression is a memory operand, it's recognized by the GO_IF_LEGITIMATE_ADDRESS() macro. In the i386 port, that's implemented by legitimate_address_p() in "i386.c".

Ross Ridge
Re: mingw32 subtle build failure
FX Coudert wrote:
> -B/mingw/i386-pc-mingw32/bin/

This looks wrong; it should be "/mingw/mingw32/bin". Putting a copy of as and ld in "/mingw/i386-pc-mingw32/bin" might work around your problem.

Ross Ridge
Re: Segment registers support for i386
Remy Saissy wrote:
>I've looked for a target specific callback to modify but I've found nothing, even in the gcc internals info pages. Do you mean I would have to modify some code outside of the i386 directory ? Or maybe to add such a callback if it doesn't exist ;)

You'd have to modify code in the main GCC directory, probably a lot of code. Since it's target dependent, you'd need to implement it using a hook or hooks.

>In which file is the tree to RTL conversion code located ?

There are several files that do this job. See the internals documentation.

>Does it mean that an RTL expression which uses reg: forces gcc to use a particular pseudo register ?

Pseudo registers aren't real registers. They either get changed to real hard registers, or to memory references to stack slots. See the internals documentation for more details.

Ross Ridge
Re: TLS on windows
FX Coudert wrote:
> Now, for an idea of how much work it represents... perhaps someone here can tell us?

It's not too hard, but it requires changing GCC and binutils, plus a bit of library support. In my implementation (more or less finished, but I haven't had time to test it yet), I did the following:

- Used the existing __thread support in the front-end. Silently ignore the ELF TLS models, because Windows only has one model.

- Added target specific (cygming) support for __attribute__((thread)), aka __declspec(thread), for MSC compatibility.

- Created a legitimize_win32_tls_address() to replace legitimize_tls_address() in i386.c. It outputs RTL like:

	(set (reg:SI tp) (mem:SI (unspec [(const_int 44)] WIN32_TIB)))
	(set (reg:SI index) (mem:SI (symbol_ref:SI "__tls_index__")))
	(set (reg:SI base)
	     (mem:SI (plus:SI (reg:SI tp)
	                      (mult:SI (reg:SI index) (const_int 4)))))
	(plus:SI (reg:SI base)
	         (const:SI (unspec:SI [(symbol_ref:SI "foo")] SECREL)))

- Handled the WIN32_TIB unspec by outputting "%fs:44" and the SECREL unspec by outputting "foo`SECREL". I couldn't use "foo@SECREL" because "@" is valid in identifiers with PECOFF.

- Supported .tls sections in PECOFF by creating an i386_pe_select_section() based on the generic ELF version.

- Added an -mfiber-safe-tls target specific option that makes the references to the Win32 TIB non-constant.

- Modified gas to handle "foo`SECREL", based on the ELF support for "@" relocations.

- Fixed some problems with TLS handling in the PECOFF linker script.

- Created an object file that defines the __tls_used structure (and thus the TLS directory entry) and __tls_index__.

Actually, the last one I haven't done yet. I've just been using a linker script to do that, but it should be in a library so the TLS directory entry isn't created if the executable doesn't use TLS.

Ross Ridge
Re: [MinGW] Set NATIVE_SYSTEM_HEADER_DIR relative to configured prefix
Ranjit Mathew wrote: > Danny, I'm using the same configure flags that you have used for GCC >3.4.5 MinGW release (*except* for --prefix=/mingw, which is something >like --prefix=/j/mingw/mgw for me), but the GCC I get is not relocatable >at all, while I can put the MinGW GCC 3.4.5 release anywhere on the >filesystem and it still works. :-( The GCC I get from my native MinGW build of the trunk is relocatable: e:\util\mygcc.new\bin\gcc -v -E -o nul -x c x.c Using built-in specs. Target: mingw32 Configured with: ../gcc/configure --prefix=/src/gcc/runtime --target=mingw32 --host=mingw32 --enable-languages=c,c++ --enable-threads=win32 --with-win32-nlsapi=unicode --enable-bootstrap --disable-werror --with-ld=/src/gcc/runtime/bin/ld --with-as=/src/gcc/runtime/bin/as Thread model: win32 gcc version 4.2.0 20060513 (experimental) e:/util/mygcc.new/bin/../libexec/gcc/mingw32/4.2.0/cc1.exe -E -quiet -v -iprefix e:\util\mygcc.new\bin\../lib/gcc/mingw32/4.2.0/ x.c -o nul.exe -mtune=i386 ignoring nonexistent directory "e:/util/mygcc.new/bin/../lib/gcc/mingw32/4.2.0/../../../../mingw32/include" ignoring nonexistent directory "/src/gcc/runtime/include" ignoring nonexistent directory "/src/gcc/runtime/include" ignoring nonexistent directory "/src/gcc/runtime/lib/gcc/mingw32/4.2.0/include" ignoring nonexistent directory "/src/gcc/runtime/mingw32/include" ignoring nonexistent directory "/mingw/include" #include "..." search starts here: #include <...> search starts here: e:/util/mygcc.new/bin/../lib/gcc/mingw32/4.2.0/../../../../include e:/util/mygcc.new/bin/../lib/gcc/mingw32/4.2.0/include End of search list. It picks up the "system include directory" without a problem. What exactly is the error you're getting that indicates that your compiled version of GCC isn't relocatable? Ross Ridge
Re: [MinGW] Set NATIVE_SYSTEM_HEADER_DIR relative to configured prefix
Ross Ridge wrote: >The GCC I get from my native MinGW build of the trunk is relocatable: Hmm... I should have sent that to gcc-patches, sorry. Ross Ridge
Re: TLS on windows
Ross Ridge wrote: > Actually, the last one I haven't done yet. I've just been using a linker > script to do that, but it should be in a library so the TLS directory > entry isn't created if the executable doesn't use TLS. Richard Henderson wrote: > You can also create this in the linker, without a library. > Not too difficult, since you've got to do that to set the > bit in the PE header anyway. Fortunately, the linker already supports setting the TLS directory entry in the PE header if a symbol named "__tls_used" exists. Section relative relocations are also already supported (for DWARF, I think), I just needed to add the syntax to gas. Ross Ridge
Re: Coroutines
Maurizio Vitale wrote:
> I'm looking at the very same problem, hoping to get very lightweight user-level threads for use in discrete event simulation.

Dustin Laurence wrote:
>Yeah, though even that is more heavyweight than coroutines, so your job is harder than mine.

Hmm? I don't see how the "Lua-style" coroutines you're looking at are any more lightweight than what Maurizio Vitale is looking for. They're actually more heavyweight, because you need to implement some method of returning values to the "coroutine" being yielded to.

Ross Ridge
Re: Coroutines
Ross Ridge wrote:
>Hmm? I don't see how the "Lua-style" coroutines you're looking at are any more lightweight than what Maurizio Vitale is looking for. They're actually more heavyweight, because you need to implement some method of returning values to the "coroutine" being yielded to.

Dustin Laurence wrote:
>I guess that depends on whether the userspace thread package in question provides for a return value as pthreads does.

Maurizio Vitale clearly wasn't looking for pthreads.

> In any case, coroutines don't need a scheduler, even a cooperative one.

He also made it clear he wanted to schedule his threads himself, just like you want to do. In fact, what he seems to be trying to implement are true symmetric coroutines.

Ross Ridge
Re: why are we not using const?
Andrew Pinski wrote:
>"Stupid" example where a const argument can change:
>
>tree a;
>int f(const tree b)
>{
>  TREE_CODE(a) = TREE_CODE (b) + 1;
>  return TREE_CODE (b);
>}

You're not changing the constant argument "b", just what "b" might point to. I don't think there are any optimization opportunities for arguments declared as const, as opposed to arguments declared as pointing to const.

Ross Ridge
Re: RFC: __cxa_atexit for mingw32
Mark Mitchell writes:
>As a MinGW user, I would prefer not to see __cxa_atexit added to MinGW. I really want MinGW to provide the ability to link to MSVCRT: nothing more, nothing less.

Well, even Microsoft's compiler doesn't just link to MSVCRT.DLL (or its successors); a certain part of the C runtime is implemented as static objects in MSVCRT.LIB. MinGW has to provide equivalent functionality in its static runtime library, or at least whatever GCC doesn't already provide in its runtime library.

> ... I think it would be better to adapt G++ to use whatever method Microsoft uses to handle static destructions.

I've looked into handling Microsoft's static constructors correctly when linking MSC-compiled objects with MinGW, and I don't think it's an either-or situation. MinGW can handle both its own style of construction and Microsoft's at the same time. I didn't look into how Microsoft handles destructors, though, because the objects I was concerned about didn't seem to use them.

>Ultimately, I would like to see G++ support the Microsoft C++ ABI -- unless we can convince Microsoft to support the cross-platform C++ ABI. :-)

Hmm... I'm not sure which would be easier. btw. regarding Microsoft's patents, Google turned up this link:

http://www.codesourcery.com/archives/cxx-abi-dev/msg00097.html

That message is from 1999, so I wouldn't be surprised if Microsoft has filed a bunch of new C++ ABI patents since then.

Ross Ridge
Re: does gcc support multiple sizes, or not?
Mark Mitchell wrote:
>I think you really have to accept that the change you want to make goes to a relatively fundamental invariant of C++.

I don't see how you can call this a relatively fundamental invariant of C++, given how various C++ implementations have supported multiple pointer sizes for much of the history of C++. Perhaps you could argue that Standard C++ made a fundamental change to the language, but I don't think so. The original STL made specific allowances for different memory models and pointer types, and this design, with its otherwise unnecessary "pointer" and "size_type" types, was incorporated into the standard. I think the intent of the "(T *)(U *)(T *)x == (T *)x" invariant was only to limit the standard pointer types, not to make non-standard pointer types of different size fundamentally not C++. (Unlike, say, the fundamental changes the standard made to how templates work...)

Ross Ridge
Re: Merging identical functions in GCC
Ian Lance Taylor wrote:
>I think Danny has a 75% implementation based on hashing the RTL for a section and using that to select the COMDAT section signature.

I don't think this is a good idea. With different compiler options the same RTL can generate different assembly instructions. Consider the case of compiling the same function multiple times with different names and different CPU architectures selected. You'd actually want the linker to merge the functions that ended up having the same assembly, but not the ones with the same RTL but different assembly.

Also, I don't think it's safe even if you merge only functions in COMDAT sections. Consider:

	#include <cassert>

	template <typename T> T foo(T a) { return a; }
	template <typename T> T bar(T a) { return a; }

	int main()
	{
	    assert((int (*)(int)) foo<int> != (int (*)(int)) bar<int>);
	}

Both foo and bar get put in their own COMDAT sections, and their RTL and assembly are the same, but it's not safe to merge them. Simply merging identical COMDAT sections would have to be optional and disabled by default, as Michael Popov said at the start of this thread. The only way I can see to do it safely would be to emit some sort of instruction not to merge a function when the compiler sees that its address is taken.

Ross Ridge
Re: Merging identical functions in GCC
Ross Ridge writes:
>I don't think this is a good idea. With different compiler options the same RTL can generate different assembly instructions. Consider the case of compiling the same function multiple times with different names and different CPU architectures selected. You'd actually want the linker to merge the functions that ended up having the same assembly, but not the ones with the same RTL but different assembly.

Daniel Berlin writes:
>So basically you are saying if you don't know what you are doing, or know you don't want to use it, you shouldn't be using it.

No, and I can't see how you've come up with such an absurd misinterpretation of what I said. As I said clearly and explicitly, the example I gave was one where you'd want to use function merging.

>(The current hash actually takes into account compiler options as a starting value for the hash, btw!)

Well, that brings up the other problem I have with this: figuring out exactly which options and which parts of the RTL should be hashed seems too error-prone. I think this is best done by the linker, which can much more reliably compare the contents of functions to see if they are the same.

Ross Ridge
Re: Merging identical functions in GCC
Ross Ridge writes:
>No, and I can't see how you've come up with such an absurd misinterpretation of what I said. As I said clearly and explicitly, the example I gave was one where you'd want to use function merging.

Daniel Berlin writes:
>Whatever. Why would you turn on function merging if you are trying to specifically get the compiler to produce different code for your functions than it did before?

Because, as I already said, you want to merge the functions that happen to be the same. You don't want to merge the ones that aren't the same. Sometimes using different compiler options (eg. for CPU architecture) generates different code, sometimes it doesn't. If you could always predict exactly what code the compiler was going to generate, you might as well write your code in assembly.

>As an FYI, you already have this situation with linkonce functions.

No, linkonce functions get merged because they have the same name.

>>I think this is best done by the linker, which can much more reliably compare the contents of functions to see if they are the same.
>
>No it can't. It has no idea what a function consists of other than a bunch of bytes, in pretty much all cases. ... Stupid byte comparisons of functions generally won't save you anything truly interesting.

Microsoft's implementation has proven that "stupid" byte comparisons can generate significant savings.

Ross Ridge
Re: Merging identical functions in GCC
Gabriel Dos Reis writes:
>Not very long ago I spoke with the VC++ manager about this, and he said that their implementation currently is not conforming -- but they are working on it. The issue has to do with f<T> and f<U> being required to have different addresses -- which is violated by their implementation.

Yes, this issue has already been mentioned in this thread, and it's a problem regardless of how you compare functions to find out whether they are the same. The compiler also needs to be able to detect when it's safe to merge functions that are identical.

Ross Ridge
Re: Merging identical functions in GCC
Ross Ridge writes:
>Microsoft's implementation has proven that "stupid" byte comparisons can generate significant savings.

Daniel Berlin writes:
>No they haven't.

So Microsoft and everyone who says they've got significant savings using it is lying?

>But have fun implementing it in your linker, and still making it safe if that's what you really want. I'm not going to do that, and I don't believe it is a good idea.

I'm not asking you to do anything. I'm just telling you that I don't think your idea is any good.

Ross Ridge
Re: Merging identical functions in GCC
Daniel Berlin writes:
>Do you really want me to sit here and knock down every single one of your arguments?

Why would you think I would've wanted your "No, it isn't" responses instead?

>Your functions you are trying to optimize for multiple cpu types and compiled with different flags may be output as linkonce functions. The linker is simply going to pick one, regardless of what CPU architecture or assembly it generated...

No, in the example I gave, the functions have different names.

>The fact is that Microsoft's implementation rarely generates significant savings over that given by linkonce functions, and when it does, it will in no way compare to anything that does *more* than stupid byte comparisons will give you.

No, "linkonce" function discarding is always done by the Microsoft toolchain and can't be disabled. The reported savings are the result of comparing the results of enabling and disabling identical COMDAT folding. I don't see how your "intelligent" hashing can do significantly better, except by merging functions that aren't really identical.

>That's nice. It's the only way to do it sanely and correctly in all cases, without having to teach the linker how to look at code, or to control the linker (which we don't on some platforms), and output a side channel explaining what it is allowed to eliminate, at which point, you might as well do it in the compiler!

How does hashing the RTL and using that as the COMDAT label solve this problem? You're telling the linker you know it's safe to merge when you don't know whether the function's address is compared in another compilation unit or not.

>You can believe what you like about the idea. Until you are willing to implement something *you* believe will help, or at the least explain how you foresee it being done safely (which Microsoft doesn't!), it's very hard to take you seriously.

As I've already said, it can be made safe by communicating to the linker which functions have had their address taken.
Yes, this requires special support from the linker, but then so has linkonce on some platforms. If that special support isn't available you're still left with an unsafe but very useful optimization for applications that don't compare function pointers. Ross Ridge
Re: Merging identical functions in GCC
Daniel Berlin writes:
>Please go away and stop trolling.

I'm not the one who's being rude and abusive.

>If your concern is function pointers or global functions, you can never eliminate any global function, unless your application doesn't call dlopen, or otherwise load anything dynamically, including through shared libs.

I hope that doesn't include global functions like list<int>::sort() and list<long>::sort(), otherwise your optimization is pretty much useless. If your optimization does merge such functions, then you're still left with the problem that their member function pointers might be compared in another compilation unit.

Ross Ridge
Re: __builtin_expect for indirect function calls
Mark Mitchell writes:
> What do people think? Do we have the leeway to change this?

If it were just cases where using __builtin_expect is pointless that would break, like function overloading and sizeof, then I don't think it would be a problem. However, it would change behaviour when C++ conversion operators are used, and I can see these being legitimately used with __builtin_expect. Something like:

	struct C { operator long(); };

	int foo()
	{
	    if (__builtin_expect(C(), 0))
	        return 1;
	    return 2;
	}

If cases like these are rare enough, it's probably an acceptable change if they now give an error because the argument types don't match.

Ross Ridge
RE: Memory leaks in compiler
Diego Novillo wrote: >I agree. Freeing memory right before we exit is a waste of time. Dave Korn writes: > So, no gcc without an MMU and virtual memory platform ever again? >Shame, it used to run on Amigas. I don't know if GCC ever freed all of its memory before exiting. An operating system doesn't need an MMU or virtual memory in order to free all the memory used by a process when it exits. MS-DOS did this, and I assume AmigaOS did as well. Ross Ridge
Re: [m32c] type precedence rules and pointer signs
DJ Delorie writes:
>extern int U();
>void *ra;
...
> foo((ra + U()) - 1)
...
>1. What are the language rules controlling this expression, and do they have any say about signed vs unsigned wrt the int->pointer promotion?

There is no integer to pointer promotion. You're adding an integer to a pointer and then subtracting an integer from the resulting pointer value. If U() returns zero then the pointer passed to foo() should point to the element before the one that ra points to. Well, it should if ra actually had a type that Standard C permitted using pointer arithmetic on.

Ross Ridge
Re: [PATCH][4.3] Deprecate -ftrapv
Mark Mitchell writes:
> However, I don't think doing all of that work is required to make this feature useful to people. You seem to be focusing on making -ftrapv capture 100% of overflows, so that people could depend on their programs crashing if they had an overflow. That might be useful in two circumstances: (a) getting bugs out (though for an example like the one above, I can well imagine many people not considering that a bug worth fixing), and (b) in safety-critical situations where it's better to die than do the wrong thing.

Richard Kenner writes:
> You forgot the third: if Ada is to use it rather than its own approach, it must indeed be 100% reliable.

Actually, that's a different issue than catching 100% of overflows, which apparently Ada doesn't require.

> Robert is correct that if it's sufficiently more efficient than Ada's approach, it can be made the default, so that by default range-checking is on in Ada, but not in a 100% reliable fashion.

On the issue of performance, out of curiosity I tried playing around with the IA-32 INTO instruction. I noticed two things: the first was that the instruction isn't supported in 64-bit mode, and the second was that on the Linux system I was using, it generated a SIGSEGV signal that was indistinguishable from any other SIGSEGV. If Ada needs to be able to catch and distinguish overflow exceptions, this and possible other cases of missing operating system support might make processor-specific overflow support detrimental.

Ross Ridge
Re: [PATCH][4.3] Deprecate -ftrapv
Robert Dewar writes:
>Usually there are ways of telling what is going on at a sufficiently low level, but in any case, code using the conditional jump instruction (jo/jno) is hugely better than what we do now (and it is often faster to use a jo than an into).

My point is that using INTO or some other processor's overflow mechanism that requires operating system support wouldn't necessarily be better for Ada, even if it performs better (or uses less space) than the alternatives. Having the program crash with a vague exception would meet the requirements of -ftrapv, but not Ada.

Ross Ridge