Re: Link tests not allowed
Douglas B Rupp wrote:
> checking for library containing strerror... configure: error: Link tests are not allowed after GCC_NO_EXECUTABLES.

You get this error if a link command fails while trying to configure the target libiberty. So the question is why did the link fail? You need to look at the target libiberty config.log file to find out why. The interesting part will be immediately after the -V test. There are many different ways that the link may have failed, and there are many different ways to fix this, depending on what exactly the problem is.

One way to solve this is to use newlib as the target C library, in which case libiberty uses a built-in list of functions instead of doing link tests. This however is clearly the wrong solution in your case, as AIX of course uses the AIX C library, not newlib.

I tried to reproduce this, and found the problem is that aix4.3 and up require additional nm/ar options. NM_FOR_TARGET in the toplevel Makefile includes the -B and -X32_64 options. Inside the gcc configure, we do "test -x $NM_FOR_TARGET" and this fails because NM_FOR_TARGET expands to more than a program name, and the shell test -x command does not handle this case. We need to extract the program name for this test. This should be easy enough to do. There is an example showing how to do this a few lines below. Want to try writing a patch? Or alternatively submitting a bug report so we can track this?

Admittedly, the configure error printed is a bit misleading. It used to make sense when it was first written, but a lot of stuff has changed since then, and the error message never got updated.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Link tests not allowed
Douglas B Rupp wrote: I'm happy to try writing a patch, but my version of gcc/configure doesn't look like what you described. I forgot to svn update my tree. It was older than I thought. I'll have to update it and try again. Did you try looking in the config.log file as I described? What is the linker error message there? -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Link tests not allowed
Daniel Jacobowitz wrote:
> I'm not at all sure how the nm failure ends up leading to this problem,
> but I'll take your word for that part.

I should have explained that part. I got an error from nm as invoked by collect. nm failed because gcc/nm in the build tree ended up as a shell script that just does 'exec "$@"', instead of pointing at the cross nm in my install tree. gcc/nm was created incorrectly because a "test -x $NM_FOR_TARGET" command in the configure script failed. This test failed because NM_FOR_TARGET is defined as "/.../bin/nm -B -X32_64" and the shell test -x command does not work with that.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Link tests not allowed
Douglas B Rupp wrote: I'm happy to try writing a patch, but my version of gcc/configure doesn't look like what you described. I updated my tree, and configure still looks the same, then I realized that you are using a branch. My mistake. So the problem I described is still there on mainline. gcc-4.1.2 may have a different problem. I'll have to try to reproduce it again with the right tree. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Link tests not allowed
Douglas B Rupp wrote:
> I'm happy to try writing a patch, but my version of gcc/configure
> doesn't look like what you described.

I tried a build with the gcc-4.1.x branch, and gcc/nm is computed correctly, so the problem I described on mainline does not exist here. Unfortunately, I wasn't able to reproduce the problem, as I don't have a copy of ppc-aix to use for the sysroot, and I don't see any configure error before the build fails because the sysroot is missing. Also, I don't have your complete configure command, or your config.log file, so I cannot do anything further here without more info.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Link tests not allowed
On Mon, 2007-01-01 at 18:19 -0800, Douglas B Rupp wrote:
> Would you like the complete config.log by private email?

Sure.

> config.log fragment

This is the wrong part of the config.log file. The interesting part is nearer the top, immediately after the command using the -V option. The use of -V probably fails, which is OK as it is only for information purposes, but the next one is the GCC_NO_EXECUTABLES test, and that one should have worked.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Fwd: Re: gcc 4.1.1 for mcore
Alexander Grobman wrote: /tmp/ccvk5vjH.s:38: Error: operand must be absolute in range 1..32, not 53 Some of the embedded target ports won't work if compiled on 64-bit hosts. mcore-elf seems to be one of them. This problem sometimes shows up as a gcc error and sometimes shows up as a binutils error, and sometimes both. Recompile as a 32-bit application (-m32), or build on a 32-bit linux host. Or file a bug report so this can be fixed. The important bit of info here isn't that the host processor is x86_64, but rather that you are running the 64-bit x86_64-linux OS on it. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Performance of gcc 4.1 vs gcc 3.4
ying lcs wrote: Can you please tell me if there is any performance gain in the output program if i switch from gcc 3.4 to gcc 4.1 on Red hat Enterprise linux 4? Performance varies greatly from one application to the next. The only way to tell if you will see a performance gain with your application by changing gcc versions is to try it yourself and see what happens. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Creating a variable declaration of custom type.
Ferad Zyulkyarov wrote:
> type_id = get_identifier("MyType");
> type_node = make_node(POINTER_TYPE);
> TYPE_NAME(type_node) = type_id;
> var_decl = build(VAR_DECL, get_identifier("t"), type_node);

Best way to figure this out is to write a simple 5 line testcase that defines a structure type and also defines a pointer to that type, and then step through gcc to see what it does. Try putting breakpoints in finish_struct and build_pointer_type.
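For example, a minimal testcase along these lines (reusing the MyType name from your snippet) should be enough to hit both finish_struct and build_pointer_type when you step through the compiler:

  struct MyType
  {
    int i;
  };

  struct MyType *t;

-- Jim Wilson, GNU Tools Support, http://www.specifix.com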
Re: Any hints on this problem? Thanks!
吴曦 wrote:
> Now, my question becomes clear. How to make my inserted function call
> not affect the original state of the program?

Try looking at a similar feature. One such similar feature is the mcount calls emitted for profiling. The various solutions for mcount include

1) saving lots of registers before the call, and restoring lots of registers after the call. This has a high cost which may not work in your case.
2) writing mcount in assembly language, so that you can avoid clobbering any registers.

Another possible solution is to use special compiler options when compiling the function. For instance -fcall-saved-r14 will tell gcc that r14 must be saved/restored in the prologue/epilogue when used. If you split your instrumentation function into a separate file, and compile with special options, this might work; see the sketch below. You will need to use such an option for every normal call clobbered register, and there are quite a few of them.

Another solution is to add the instrumentation earlier, and use expand_call.
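A rough illustration of the "separate file plus special options" idea; the file, function, and register names are made up for the example, and you would pick one -fcall-saved-REG option per call-clobbered register on your target:

  /* instrument.c -- compiled by itself with options such as
     -fcall-saved-r14 -fcall-saved-r15 ... so that calls into this
     file do not clobber those registers.  */
  unsigned long insert_count;

  void
  my_instrument_hook (void)
  {
    insert_count++;	/* keep the body deliberately trivial */
  }

-- Jim Wilson, GNU Tools Support, http://www.specifix.com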
Re: Any hints on this problem? Thanks!
On Sat, 2007-02-10 at 07:45 +0800, 吴曦 wrote:
> Thanks for your hints. Does that mean doing instrumentation at the
> "RTL expand" level?

expand_call is a function in the calls.c file. It knows how to do function calls correctly. If you use this, then registers will be saved and restored correctly.

> However, I have tried the following method, add a define_expand in
> ia64.md, the template used in define_expand is the same as the one
> which will emit a ld instruction, just like this one:

A define_expand is used only for creating RTL. This define_expand will be used only if someplace else has "gen_gift_load_symptr_low (...);". A define_expand is not used for matching instructions. So if you have a define_expand emitting some RTL, then someplace else there must be a define_insn that matches it.

You will need to learn a lot more about gcc internals to get this working. It is probably simpler to just write your instrumentation function in assembly code. Or maybe compile it to assembly, and then fix it by hand.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: how to dump tree in pt.c:tsubst?
Larry Evans wrote:
> How does one dump the trees in pt.c:tsubst in some human readable
> cp_dump_tree(&di, args);

cp_dump_tree is a hook for printing C++ specific trees. Try dump_node in tree-dump.c instead. Or one of the other functions in this file. I'm not sure if you can call dump_node directly. There are also functions in print-tree.c which produce a different style of output. The entry point here is debug_tree.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Question about register allocation (GCC 4.1)
Thomas Bernard wrote:
> So basically this is known at compile-time. Such a "dynamic" register
> allocator. Is it that possible and what are the implications on the
> register allocator ???

The IA-64 case is much simpler. We have variable sized overlapping register windows. GCC already has register window support for sparc and i960 (since obsoleted). The dynamic part is hard though. I didn't try to solve that. I have at most 96 locals, 8 inputs, and 8 outputs. The port is defined to have 80 locals, 8 inputs, and 8 outputs, i.e. the size of the input/output sets is fixed at their maximums. After register allocation, I look through the input/output register sets to see how many were actually used, and I just allocate that many. The only down side is that if some inputs and outputs are unused, then I can't add them to the local set. However, given that I have 80 of them, it is very rare to run out. Also, since input/output registers can be used for other stuff besides input/output arguments, it is even rarer to run out of locals and still have unused input/output regs left. So I don't worry about it.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: "Installing GCC" documentation: Why a nonstandard title page?
Brooks Moses wrote: The install.texi manual has the following bit of code for the title page: Looking at the svn history, I see that this titlepage line is present in the initial checkin, and the initial checkin says * doc/install.texi: New file. Converted to texinfo from the HTML documentation in wwwdocs/htdocs/install. This was 2001-05-11. You could try looking at the mailing lists for more info, but probably this is all a side-effect of the conversion from HTML to texinfo, and maybe also the desire for the html output to look good. Probably many more people are viewing the output on our web site than via the info program. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Question about source-to-source compilation
Thomas Bernard wrote: framework. For instance, let's say the input language is C and the output language is C annotated with pragmas which are the results of some code analysis (done at middle-end level). It is possible to write a backend that emits C. Sun has one for instance. However, at present, it is not possible to add such a backend into the FSF GCC sources, because this would allow people to subvert the GPL, and FSF policy does not allow this. The Sun backend was already submitted once and rejected for that reason. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Incorrect code generation while passing address of char parameter
Shekhar Divekar wrote:
> (insn 5 4 6 0x0 (set (reg/v:SI 71)
>         (ashiftrt:SI (reg/v:SI 71)
>             (const_int 24 [0x18]))) -1 (nil)
>     (nil))

This looks suspect. You shouldn't be using the same input and output pseudo regs here. You should instead generate a temporary for the output of the left shift, and use that as the input of the right shift; see the sketch below. That should at least help, since it means one less instruction that needs to be modified by put_var_into_stack. There may also be other things wrong. You could try looking at what other ports do. Your port is not the only RISC like port in gcc-3.3.x. So start building other random ports, and feeding in your testcase, and look at the RTL that they generate, and figure out why yours is different.
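Roughly, the RTL generation should look something like this (a gcc-3.3-era internals fragment, assuming your port defines the standard ashlsi3/ashrsi3 named patterns; "src" and "dst" stand in for whatever operands you already have):

  rtx tmp = gen_reg_rtx (SImode);

  /* Shift into a fresh pseudo instead of shifting a register into itself.  */
  emit_insn (gen_ashlsi3 (tmp, src, GEN_INT (24)));
  emit_insn (gen_ashrsi3 (dst, tmp, GEN_INT (24)));

-- Jim Wilson, GNU Tools Support, http://www.specifix.com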
Re: Quick FUNCTION_MODE doco query
Dave Korn wrote:
> Was this description perhaps written in pre-RISC days?

Yes. You can find identical text in the gcc-1.42 documentation, when almost every port was a CISC. The docs in rtl.texi for the call expression are a bit clearer about the intent here for FUNCTION_MODE.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Error in checking compat.exp
Revital1 Eres wrote:
> ERROR: tcl error sourcing /home/eres/mve_mainline_zero_12_3/gcc/gcc/testsuite/g++.dg/compat/compat.exp.
> ERROR: couldn't open "/home/eres/mve_xline_zero_12_3/gcc/gcc/testsuite/g++.dg/compat/abi/bitfield1_main.C":

Note that mainline got changed to xline. Also note that the directory has files bitfield_main.C, bitfield_x.C, and bitfield_y.C. So it looks like there is a tcl script somewhere to replace "main" with "x", which fails if the directory path contains "main" anywhere in it other than in the filename at the end.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Compiling without the use of static registers in IA-64
[EMAIL PROTECTED] wrote: I was wondering if anyone knew how I could modify gcc to not use static general purpose registers on an IA-64 machine? Besides the -ffixed-reg option Vlad mentioned, there is also a documented IA-64 specific option for this. See the docs. No reason why this option needs to be IA-64 specific though; it was just implemented that way for convenience before the IA-64 port was contributed to the FSF. You may have trouble with 64-bit constants if you disable r2/r3. There are only four regs that work with the addl instruction, and you have just disallowed all of them. You might need other changes to make this work. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: RTL representations and virtual addresses
Sunzir Deepur wrote:
> I use -da to dump RTL files of the passes. Is there a way to add the
> virtual addresses of each directive?

You can't compute addresses reliably until after reload. Before reload, the size of each rtl insn is unknown.

> My wish is to generate a CFG in which I would know, for each basic
> block and RTL command, what is the virtual address this command will
> be at in the binary..

You can already find much of this info in the gcov profiling files. See profile.c and gcov.c and other related files.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Constrain not satisfied - floating point insns.
Rohit Arul Raj wrote:
> (define_insn "movsf_store"
>   [(set (match_operand:SF 0 "memory_operand" "=m")
>         (match_operand:SF 1 "float_reg" "f"))]

You must have a single movsf define_insn that accepts all alternatives so that reload will work. You can't have separate define_insns for movsf and movsf_store.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: error: unable to find a register to spill in class 'FP_REGS'
Markus Franke wrote:
> Does anybody have an idea what could be wrong in the machine
> description or where to start finding the error?

Compile with -da, and start looking at the RTL dumps, mainly the greg and lreg ones. The greg one will have a section listing all of the reloads generated; find the list of reloads generated for this insn 45. lreg will have info about register class preferencing. It will tell you what register class the compiler wants to use for this insn.

The fact that this insn doesn't do FP isn't important. What is important is how the pseudo-regs are used. If the pseudo-reg 92 is used in 10 insns, and 8 of them are FP insns and 2 are integer move insns, then the register allocator will prefer an FP reg, since that should give the best overall result, as only 2 insns will need reloads. If it used an integer reg, then 8 insns would need reloads.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Constrain not satisfied - floating point insns.
Dave Korn wrote: But it is ok to use a define_expand (that accepts all alternatives) for movsf and use that to generate one of several movsf_ insns, isn't it? No. You have to have a single movsf insn that accepts all constraints. reload knows that it can fix practically anything by emitting a move insn to move an operand into a reg or mem. However, you can't fix a move insn by emitting yet another move insn. That doesn't actually fix anything. That just moves the same problem to the new move insn. So the rules are different for move insns. You have to have a single pattern that accepts all alternatives that might be generated by reload. If you want to be pedantic, you can actually have multiple movsf patterns if you have operands that can't be generated by reload. For instance if you have EXTRA_CONSTRAINTS R, S, and T, then you could have 3 patterns one which accepts R and all usual operand combinations, one which accepts S and all usual operand combinations, and one which accepts T and all usual operand combinations. Doing that would be pointless though. You would be much better off just having the one pattern that accepts R, S, T and the usual operand combinations. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: What to do when constraint doesn't match
Mohamed Shafi wrote: I have a define_expand with the pattern name mov and a define_insn mov_store I also have a pattern for register move from 'a' to 'b', call it mova2b. You have to have a single mov pattern that accepts all valid constraint combinations. You can't use two separate patterns like this. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Constrain not satisfied - floating point insns.
Dave Korn wrote: But it is ok to use a define_expand (that accepts all alternatives) for movsf and use that to generate one of several movsf_ insns, isn't it? Reload doesn't use the move define_expands. It can't. A define_expand is allowed to do stuff like generate new RTL that would completely mess up what reload is trying to do. So reload has to generate move insns directly, and hope they match. And since emitting a move insn to fix another move insn makes no sense, existing move insns are fixed in place, and not rerecognized after they are fixed. Hence, you need a single move insn pattern that accepts all of the usual operand combinations. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Constrain not satisfied - floating point insns.
Rohit Arul Raj wrote:
> But for moving an immediate value, compiler should use a data register
> but it is using a floating point register.

Sometimes it is impossible to avoid using an FP reg where we would prefer to have a data register. This is why reload exists, to fix things that don't match their constraints after register allocation.

> Still i get an ICE for constrain not satisfied.

It isn't clear what the problem is. Reload should have either forced the constant into memory, or else added a move insn so that we could load it into a data reg and then move the data reg to an FP reg. Try looking at the RTL dumps to see what reloads were generated for this insn. If the constraint failure happened after reload was done, what was the pre- and post-reload RTL for this insn? Did a following pass like post-reload cse accidentally break the insn? If the constraint failure is during reload, then you may have to spend a little time stepping through reload to see what went wrong. The interesting bit would be in find_reloads; you can set a conditional breakpoint on the insn number to stop here when it processes this insn. This function might be called multiple times for the same insn, as we have to iterate until all insns are fixed.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: error: unable to find a register to spill in class 'FP_REGS'
Markus Franke wrote:
> That means the compiler has to reload the pseudo registers 92 and 93
> for this instruction, right?

First we do register allocation. Then, after register allocation, if the chosen hard registers don't match the constraints, then we use reload to fix it.

> The relevant data for instruction 45 in .greg looks like that:

Insn 45 in the greg dump looks nothing like the insn 45 in the expand dump, which means you are looking at the wrong insn here. But it was insn 45 in the original mail. Did you change the testcase perhaps? Or use different optimization options? The info we are looking for should look something like this:

  Reloads for insn # 13
  Reload 0: reload_out (SI) = (reg:SI 97)
        R1_REGS, RELOAD_FOR_OUTPUT (opnum = 0)
        reload_out_reg: (reg:SI 97)
        reload_reg_rtx: (reg:SI 1 %r1)

  ;; Register 92 in 9.
  ;; Register 93 in 10.

This tells us that pseudo 92 was allocated to hard reg 9, and pseudo 93 was allocated to hard reg 10. I didn't see reg class preferencing info for these regs, but maybe it is in one of the other dump files. The earlier message has rtl claiming that pseudo 92 got allocated to register 1 (r1). I seem to be getting inconsistent information here.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: regclass oddity?
Dave Korn wrote: When regclass determines that placing an operand into either one of several register classes would have the same cost, it picks the numerically highest one in enum reg_class ordering. One possible solution is to give the special registers a higher REGISTER_MOVE_COST to discourage their use. Another solution is to use ! and ? in the constraints to increase the cost of alternatives that are inconvenient. There is also CLASS_LIKELY_SPILLED_P which is supposed to discourage local-alloc from using registers in single-reg classes. At least there is code for this in gcc-2.95.3. It looks like things have changed in this area since then. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: regclass oddity?
Dave Korn wrote: Because my movsi3 pattern that allows both GENERAL_REGS through an 'r' constraint, and MPD_REG and MPRL_REG through custom constraint letters ('a' and 'd'), does that mean I need to define a union class or I'm actually doing something wrong? Adding the union classes would certainly help. The mips port for instance has union classes for hi, lo, the general regs, along with some others. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Constrain not satisfied - floating point insns.
Richard Sandiford wrote: That isn't unconditionally true, is it? reload1.c:gen_reload uses gen_move_insn, which is just a start_sequence/end_sequence wrapper for emit_move_insn_1, which in turn calls the move expanders. Yes, you are right. I over simplified. We can use gen_move_insn when we expect that this will result in an insn that won't require any further reloading. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Trouble understanding reload dump file format..
Dave Korn wrote: In particular, I really don't understand what a RELOAD_FOR_INPUT_ADDRESS means when all the operands are regs, or why there should be three reloads for the same operand when it's just a clobber scratch. Is there something special about how reload handles clobber and match_scratch? Reload types are used for a number of purposes. One of them is the ordering used when emitting the actual reload fixup insns. A fixup for an input reload must come before the insn. A fixup for an output reload must come after the insn. A fixup for an address used by an input reload must come before the input reloads. A fixup for an address used by an output reload must come after the insn, but before the output reloads. Etc. Careful ordering allows us to know the lifetimes of every reload, and hence allows us to share reload registers between reload types that don't overlap, thus reducing the total number of spill regs required. In this case, we have an input reload that requires secondary reloads. Obviously, the secondary reloads must be emitted before the input reload, so we just use the "input address" reload type for convenience, even though there is no actual address involved here. That gets the reloads emitted in the right place. As usual, I'm over simplifying. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Bitfield conversion bug in 4.2?
Eric Lemings wrote:
> test.cpp: In function 'int main()':
> test02.cpp:6: error: could not convert 's.S::v' to 'bool'
> test02.cpp:6: error: in argument to unary !

As per my gcc-bugs message. I suggest this untested patch.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com

2007-03-19  Jim Wilson

	* call.c (standard_conversion): Set fcode after call to
	strip_top_quals.

Index: call.c
===================================================================
--- call.c	(revision 123071)
+++ call.c	(working copy)
@@ -632,7 +632,10 @@
 	  tree bitfield_type;
 	  bitfield_type = is_bitfield_expr_with_lowered_type (expr);
 	  if (bitfield_type)
-	    from = strip_top_quals (bitfield_type);
+	    {
+	      from = strip_top_quals (bitfield_type);
+	      fcode = TREE_CODE (from);
+	    }
 	}
       conv = build_conv (ck_rvalue, from, conv);
     }
Re: why not use setjmp/longjmp within gcc?
Basile STARYNKEVITCH wrote:
> It is quite standard since a long time, and I don't understand why it
> should be avoided (as some old Changelog suggest).

Which old ChangeLog? What exactly does it say? We can't help you if we don't know what you are talking about.

There used to be setjmp calls in cse.c and fold-const.c. This was in the FP constant folding code. We would call setjmp, then install a signal handler that called longjmp, in case an FP instruction generated a signal, so we could recover gracefully and continue compiling. But nowadays we use software emulated FP for folding operations, even when native, so we no longer have to worry about getting FP signals here, and the setjmp calls are gone.
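For reference, the old scheme looked roughly like this (a from-memory sketch, not the actual cse.c/fold-const.c code; longjmp'ing out of a signal handler is exactly the sort of fragility that made software FP emulation attractive):

  #include <setjmp.h>
  #include <signal.h>

  static jmp_buf float_error;

  static void
  fp_trap (int sig)
  {
    (void) sig;
    longjmp (float_error, 1);
  }

  double
  fold_fp_divide (double a, double b)
  {
    double result = 0.0;
    void (*old_handler) (int) = signal (SIGFPE, fp_trap);

    if (setjmp (float_error) == 0)
      result = a / b;		/* may raise SIGFPE on some hosts */
    else
      result = 0.0;		/* recover gracefully, keep compiling */

    signal (SIGFPE, old_handler);
    return result;
  }

-- Jim Wilson, GNU Tools Support, http://www.specifix.com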
Re: Adding Profiling support - GCC 4.1.1
Rohit Arul Raj wrote:
> 1. The function mcount: While building with native gcc, the mcount
> function is defined in glibc. Is the same mcount function available in
> newlib? or is it that we have to define it in our back-end as SPARC
> does (gmon-sol2.c).

Did you try looking at newlib? Try something like this:

  find . -type f | xargs grep mcount

That will show you all of the mcount support in newlib/libgloss. sparc-solaris is a special case. Early versions of Solaris shipped without the necessary support files. (Maybe it still does? I don't know, and don't care to check.) I think they were part of the add-on extra-cost compiler. This meant that people using gcc only were not able to use profiling unless gcc provided the mcount library. Otherwise it never would have been put here. mcount belongs in the C library.

> 2. Is it possible to reuse the existing mcount definition or is it
> customized for every backend?

It must be customized for every backend.

> 3. Any other existing back-ends that support profiling.

Pretty much all targets do, at least ones for operating systems. It is much harder to make mcount work for an embedded target with no file system. If you want to learn how mcount works, just pick any existing target with mcount support, and study it.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: -Wextra and enumerator/non-enumerator in conditional expressions
Ching, Jimen (US SSA) wrote:
> According to the manual, I should be getting a warning, but I don't.
> Did I misunderstand the manual?

A conditional expression, as per the ISO C/C++ standards, is an expression of the form (A ? B : C). There is no conditional expression in your testcase. Also, in order to see the warning, you have to use a type that the enum does not easily convert to. Something like return (1 ? BAR : 1L); works.
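A self-contained version of that one-liner, with the surrounding declarations filled in (the enum name and value here are invented for the example):

  enum foo { BAR = 1 };

  long
  f (void)
  {
    /* Mixes an enumerator and a non-enumerator in a conditional
       expression, which is the case -Wextra is documented to warn about.  */
    return (1 ? BAR : 1L);
  }

-- Jim Wilson, GNU Tools Support, http://www.specifix.com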
Re: DWARF Formats - GCC 4.1.1
Rohit Arul Raj wrote: Can any one suggest a right place to find the differences between the DWARF formats in gcc compiler versions 3.4.6 and 4.1.1? They both follow the standard, so there is no major change here. There are of course changes in the details. To find the details, you could compare the dwarf2out.c files. You could check the ChangeLog, svn log, and gcc-patches mailing list to see individual patches. You could compile with -S -dA and look at the assembly language output. You could dump the debug info from object files with readelf and compare them. Etc. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Possible bug in preprocessor
JoseD wrote:
> @James What do you mean by 16.3.3/3? GCC's version?

This is a reference to the ISO C standard.

> Still don't see what the problem with 2 tokens is...

The problem is the fact that they are 2 tokens. You can do a ## b to create ab, but you can not do a ## ( to create a( because a( is two tokens. See the ISO C standard.
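To make that concrete (the macro names are invented for the example):

  /* OK: pasting two identifier fragments forms the single token "foo".  */
  #define CAT(a, b) a ## b
  int CAT(fo, o) = 1;		/* expands to: int foo = 1; */

  /* Not OK: an identifier pasted with "(" cannot form one valid
     preprocessing token, so a definition like
         #define CALL(f) f ## (0)
     is rejected (undefined behavior per the standard).  */

-- Jim Wilson, GNU Tools Support, http://www.specifix.com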
Re: adding dependence from prefetch to load
George Caragea wrote: So my initial question remains: is there any way to tell the scheduler not to place the prefetch instruction after the actual read? You can try changing sched_analyze_2 in sched-deps.c to handle PREFETCH specially. You could perhaps handle it similarly to how PRE_DEC is already handled, except that you don't have a MEM here which is an unfortunate complication. You probably have to create a MEM RTL in order to pass it to sched_analyze_1. Or you could try changing the representation of your prefetch insn rtl. If you add an unspec_volatile to the pattern, you should get the dependency you want. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: peephole patterns are not matching
Mohamed Shafi wrote: even i wrote define_peephole2 which is similar to the above. But the above patterns are not matched at all. But i can find these patterns in the rtl dumps. Run cc1 under gdb. Put a breakpoint in the peephole function. Step through the code to see what is wrong. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: GCC mini-summit - benchmarks
Kenneth Hoste wrote:
> I'm not sure what 'tests' mean here... Are test cases being extracted
> from the SPEC CPU2006 sources? Or are you referring to the validity
> tests of the SPEC framework itself (to check whether the output
> generated by some binary conforms with their reference output)?

The claim is that SPEC CPU2006 has source code bugs that cause it to fail when compiled by gcc. We weren't given a specific list of problems. There are known problems with older SPEC benchmarks though. For instance, vortex fails on some targets unless compiled with -fno-strict-aliasing.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Problem building gcc on Cygwin
Tom Dickens wrote:
> ../gcc/configure -enable-languages=c,c++,fortran.
> make[1]: Leaving directory `/cygdrive/c/gcc-4.1.2/obj'

You ran the wrong configure script. You must always run the toplevel configure script, not the one inside the gcc directory. So instead of doing

  cd gcc-4.1.2
  mkdir obj
  cd obj
  ../gcc/configure

which will fail, you should instead do

  mkdir obj
  cd obj
  ../gcc-4.1.2/configure

which will work.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: A question on gimplifier
H. J. Lu wrote:
> __builtin_ia32_vec_set_v2di will be expanded to
>
>   [(set (match_operand:V2DI 0 "register_operand" "=x")
>         (vec_merge:V2DI
>           (vec_duplicate:V2DI
>             (match_operand:DI 2 "nonimmediate_operand" "rm"))
>           (match_operand:V2DI 1 "register_operand" "0")
>           (match_operand:SI 3 "const_pow2_1_to_2_operand" "n")))]

Named rtl expanders aren't allowed to clobber their inputs. You will need to generate a pseudo-reg temp in the expander, copy the first input to the temp, and then use the temp as the output/input argument. There are probably lots of existing examples in the i386 *.md files to look at. See for instance the reduc_splus_v4sf pattern in the sse.md file.
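In outline, the preparation statements of the define_expand would do something like this (a rough sketch only; gen_vec_set_v2di_internal is a placeholder name for whatever define_insn actually implements the operation, and its operand order may differ):

  rtx tmp = gen_reg_rtx (V2DImode);

  /* Copy the first input into a fresh pseudo so the real input is never
     clobbered, let the insn use the temp as its matched output/input
     operand, then copy the temp to the real result.  */
  emit_move_insn (tmp, operands[1]);
  emit_insn (gen_vec_set_v2di_internal (tmp, tmp, operands[2], operands[3]));
  emit_move_insn (operands[0], tmp);
  DONE;

-- Jim Wilson, GNU Tools Support, http://www.specifix.com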
Re: Explicit NOPs for a VLIW Machine
Balaji V. Iyer wrote: I am porting GCC 4.0.0 to a proprietary VLIW machine, and I want to insert NOPs explicitly wherever there is an Output/Flow/Anti dependencies. I am currently doing this insertion in the machine dependent reorganization phase. Is there a way to do this in machine description file (or during scheduling phase) itself (or a better way to do this)? You could look at what the IA-64 port does. We delay the second scheduling pass until the mach dep reorg pass, and then use scheduler hooks to insert the padding nops we need. We also do bundling at the same time. This gets pretty complicated for IA-64 because of the bundling issues, but it is doable. Otherwise, no, there is no simple way to do this other than what you are already doing. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: A New Architecture
[EMAIL PROTECTED] wrote:
> The GCC testing framework is so complicated to understand. It is built
> on top of the DejaGnu which are built on top of Expect which are built
> on top of Tcl and Autoconf which ..

You don't need autoconf etc to run dejagnu. These are needed only if you want to do development work on dejagnu itself. You only need expect and tcl to run dejagnu.

Just start with one of the existing simulator files in the baseboards directory, like mips-sim.exp, copy it to your target name, e.g. score-sim.exp, modify as appropriate, and then run the testsuite via

  make -k check RUNTESTFLAGS="--target_board=score-sim.exp"

and hopefully it should work. You can add -v options to RUNTESTFLAGS to help debug testsuite issues. The more -v options you add, the more debugging output you get.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: cross build gcc 4.0/4.1 fail
Cauchy Song wrote:
> i686-pc-mingw32-gcc -O2 ... /opt/cvs_work/gcc40-branch/gcc/libgcc2.c -o libgcc/./_muldi3.o
> In file included from /opt/cvs_work/gcc40-branch/gcc/libgcc2.c:56:
> /opt/cvs_work/gcc40-branch/gcc/libgcc2.h:34: warning: ignoring #pragma GCC visibility
> /opt/cvs_work/gcc40-branch/gcc/libgcc2.h:117: error: no data type for mode `SC'

Perhaps you are using the wrong cross-compiler for this canadian cross build? You can check by adding -v to the above compiler command. When building a canadian cross, you must use a cross compiler of the same gcc version. In this case, the i686-pc-mingw32-gcc must be built from the same gcc-4.0-branch sources you are using for the canadian cross build. You should consider using --prefix options so that each gcc version gets installed in a different place. This will help avoid confusion about gcc versions.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: GCC Installation Problem - Please help....
Renju Anand wrote:
> I'm facing compilation problem during GCC( 3.2.2 ) installation in
> LynxOS under x86 platform.

You didn't say what kind of problems you ran into. Without details, we can't really help. I can say that, historically, the LynxOS support in FSF gcc has been poor, and unlikely to be usable without some work. We stopped supporting gcc-3.2.x a long time ago, so we aren't going to do anything to fix problems with it. If you are using a gcc from someone else that has been modified to support LynxOS, then perhaps it should work, but you would have to talk to the vendor. We don't support gcc versions released by others.

On the plus side, there was some LynxOS work done last year.

2004-08-05  Adam Nemet  <[EMAIL PROTECTED]>

	* config.gcc (case i[34567]86-*-lynxos*): Update to LynxOS 4.0.
	(case rs6000-*-lynxos*): Rename it to powerpc-*-lynxos*.
	Update to LynxOS 4.0.

These patches are in the gcc-4.0.x series. So if you have LynxOS 4.0, then gcc-4.0.2 should work for you.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Error at compiling linux-2.6.13.4
Antonín Kolísek wrote: compiling linux-2.6.13.4 (vanilla) on my PC (linux-2.6.13.1, gcc-3.3.6, glibc-2.3.5). gcc-3.3.6 isn't being maintained anymore, which means we aren't going to fix this. You will need to use a more recent gcc version, or find a way to workaround the problem, for instance by using different optimization options. By the way, gcc bugs should be reported into bugzilla, rather than mailed to this list. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: [gfortran] Change language name from "f95" to "fortran"
FX Coudert wrote:
> Attached patch changes the name of the language in --enable-languages
> from "f95" to "fortran", and in a few other places. There are still
> lots of places which are referred to as f95 (such as f951 ;-), but
> they are all internal uses.

This happened a month ago. It just occurred to me today that it is a bit odd that I can do --enable-languages=fortran, but I can't do -x fortran. The -x option only accepts f77 and f95. Shouldn't -x fortran work also?

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: CVS access to the uberbaum tree
Peter Barada wrote:
> Does the uberbaum tree exist on savannah, or is it only on
> sources.redhat.com? If so, what is the procedure for accessing it?

I would not recommend use of uberbaum. There are some old-time ex-Cygnus hackers that use it, because it gives an environment familiar to the one we used inside Cygnus. For everyone else, it probably causes more problems than it solves.

uberbaum exists only on sourceware.org. It is a symlink tree that tries to make two CVS repositories look like one. Because it is a fake tree, you can not use it for checking in patches. You can only use it for checkouts. Because few people use it, when it breaks, it may be a while before it is fixed. Personally, I find it easier to just check out the different trees and then create a local symlink tree.

Note that since gcc is in the process of moving to subversion, uberbaum will likely stop working in a couple of weeks. It is probably pointless to start using it now. I don't recall how to use uberbaum, though I do recall that the instructions are buried in one of the gcc mailing list archives. If you search, you should be able to find them.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Is gcc optimized for thread level parallelism?
x z wrote:
> Is gcc optimized for thread level parallelism, in view of the recent
> development of SMT and multicore architectures?

No, but we are working on OpenMP support, which is somewhat related. This isn't automatic parallelization; it requires programmer instrumentation via pragmas. This is probably more directed at multiprocessor machines than threads, but it is a start in the right direction. See http://gcc.gnu.org/projects/gomp/

This is still in early stages of implementation. Don't expect anything to work yet.
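To give a flavor of the pragma-based annotation (a minimal sketch; it needs a compiler built with the gomp support and, once that exists, an option along the lines of -fopenmp):

  int
  main (void)
  {
    int i;
    int a[1000];

  #pragma omp parallel for
    for (i = 0; i < 1000; i++)
      a[i] = 2 * i;

    return a[0];
  }

-- Jim Wilson, GNU Tools Support, http://www.specifix.com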
Re: Deinitialization of globals
Piotr Wyderski wrote: Why isn't c destroyed at the very end? Is it a bug or a correct behaviour? It is a bug. Where the bug lies depends on lots of info you left out, such as the gcc version, the binutils version, and the target. See http://gcc.gnu.org/bugs.html for info on how to submit bug reports. My first guess would be a linker script or binutils problem. This testcase works as expected on x86 Fedora Core 4 by the way. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: A question about RTL output
Eric Fisher wrote:
> This is a strange problem. Why an operation that should be a 'xorsi3'
> format, yet it comes out with a 'scond' format.

Probably because it was optimized. If you want a better answer, you have to give us more info about what happened, such as a C testcase, and RTL dumps. But it is probably better if you look at this yourself. Generate debugging dumps, -da -fdump-tree-all, and then start looking. Presumably an XOR was generated at first, and then got optimized to an scond at some point.
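For instance, source along these lines starts out as an XOR at expand time, but the combiner can fold the XOR and the comparison into a single store-condition (scond style) pattern on targets that provide one; whether that actually happens depends on the target and options:

  int
  f (int a, int b)
  {
    return (a ^ b) != 0;	/* equivalent to a != b */
  }

-- Jim Wilson, GNU Tools Support, http://www.specifix.com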
Re: MIPS TLS relocation assembly code invalid from GCC-4.1...
Steven J. Hill wrote:
> GCC guts, but could not figure out why I get the symbolic register
> representations in the glibc compiled code and not in my stuff. Can

Those aren't symbolic registers. Those are variable names. Try looking at the input file tst-tls10.c, and notice that it has variable names a1, a2, and a3. So somehow, in your output, the variable name a1 got replaced with the register name $5, which won't work. It is a very bad idea to try to use register names that are indistinguishable from variable names. One of them must have a prefix and/or postfix to avoid ambiguity.

Maybe you have a macro somewhere that is trying to define a1 to $5? This should only be an issue if you are trying to preprocess .s output though. Or maybe you are using a compiler option to change register names? -mrnames was removed in gcc-4.0, and I'm skeptical that this could cause the failure that you are seeing. Or maybe there is just a bug in the mips uclibc tls support.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Question on Dwarf2 unwind info and optimized code
Christophe LYON wrote:
> I have been looking at the Dwarf2 frame info generated by GCC, and how
> it works. From what I can see, only the register saves are recorded,
> and not the restores. Why?

The frame info is primarily used for C++ EH stack unwinding. Since you can't throw a C++ exception in an epilogue, epilogue frame info isn't needed for this, and was never implemented for most targets. Which is a shame. There is a PR for this, PR 18749, for the x86-64 target. The lack of epilogue unwind info shows up if you run the libunwind testsuite. Otherwise, it is really hard to find an example where the missing unwind info is a problem.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: semantics of null lang_hooks.callgraph.expand_function?
Gary Funck wrote: While working with GCC's language hooks, we found that certain places in GCC test for a null value of lang_hooks.callgraph.expand_function, but cgraph_expand_function() calls the hook directly: When cgraph was first added, it was optional, and could be disabled if -fno-unit-at-a-time was used, or if the language front-end did not support cgraph. For a while, our intentions have been to make this mandatory, and eliminate the -fno-unit-at-a-time option. It appears that we have already reached the point where front end support for cgraph is mandatory, as the code no longer works when callgraph.expand_function is NULL. This means all of the checks for NULL are now obsolete and can be removed. The -fno-unit-at-a-time options still exists meanwhile, but will eventually be dropped. It looks like gcc-3.4 supports a NULL callgraph.expand_function hook, and gcc-4.0 and later do not, so I'd guess this transition happened when tree-ssa got merged in. Or maybe it was enabled by the tree-ssa work. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: semantics of null lang_hooks.callgraph.expand_function?
Dueway Qi wrote:
> I have found another similar case. lang_hooks.callgraph.analyze_expr in
> gcc/gcc/cgraphunit.c
>
>   490    if (lang_hooks.callgraph.analyze_expr)
>   491      return lang_hooks.callgraph.analyze_expr (tp, walk_subtrees,
>   492                                                data);
>
> but in another part of this file
>
>   517    if ((unsigned int) TREE_CODE (t) >= LAST_AND_UNUSED_TREE_CODE)
>   518      return lang_hooks.callgraph.analyze_expr (tp, walk_subtrees, data);

The docs say that analyze_expr is used for unrecognized tree codes. So if a front end will generate unrecognized tree codes, then it must define the analyze_expr hook.

This explains the second part of this. If we see an unrecognized tree code, then there is no real need to check if analyze_expr is defined, because it must be defined. This means someone writing a language front end would get an ICE here, but an end user would not. This could be a bit nicer for a language front end writer, but this doesn't look like a serious problem.

For the first part of this, we don't know for sure if we have any unrecognized tree codes, so we can't assume that the analyze_expr hook is defined. We must check first.

This is an analysis from just looking at the code. I haven't tried to debug it and see what is really going on.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: GccPowerpc eabi HowTo - probem with stido functions ( sprintf)
moshed (sent by Nabble.com) wrote:
> /usr/local/bin/../powerpc-eabi/lib/libc.a(vfprintf.o)(.text+0x18a0): In function `_vfprintf_r':
> .../../../.././newlib/libc/stdio/vfprintf.c:1065: undefined reference to `__umoddi3'

These are in the libgcc library.

> /usr/local/bin/../powerpc-eabi/lib/libc.a(vfprintf.o)(.text+0x18bc):../../../.././newlib/libc/stdio/vfprintf.c:1066: undefined reference to `__udivdi3'
> /usr/local/bin/../powerpc-eabi/lib/libc.a(makebuf.o)(.text+0x12c): In function `__smakebuf':
> .../../../.././newlib/libc/stdio/makebuf.c:96: undefined reference to `isatty'

These are syscalls, which need to be provided by you. There are stubs you can use in libgloss.

> Only this example working well only without %d? It's insignificant for me
> sprintf (buff, "%s", "my_test");

If you compile with optimization, gcc will convert this to a string copy which needs no library calls.

> powerpc-eabi-ld -Map ADLAP.map -o ADLAP.elf appl.o main.o hardware.o init.o -T lnk_mpc555_rom.lcf -L c:/555/libumas -lumas -lm -lc

Using ld to link is almost always a mistake. You should use gcc to link instead, and this will solve half your trouble, which is the failure to link in libgcc. If you really need to use ld directly, then use gcc -v to see what the correct linker command is, and then modify as necessary.

The other half is probably a bug in your linker script, not including the necessary libgloss system call stubs which will work on the simulator. If you are running on a bare board, then you will need syscall stubs that call monitor routines. Some of these may not be supported on the board, in which case you can return an error, and just avoid calling them.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: non mainstream hardware.
Brian Makin wrote:
> Is there any need for people providing access to non
> mainstream/commercial hardware?

There are various services for this that are already available. HP for instance has the testdrive program, http://www.testdrive.hp.com. You can get free access to various systems here. The web site seems to be rather slow today though. Similarly, Sourceforge has a Compile Farm. Most workstation vendors probably have similar systems available.

A gcc bootstrap might be considered an abuse of some of these systems. You might want to ask about that before trying it. Still, access to systems is available if one has sufficient motivation.

The real problem here is lack of motivation. If I don't know anyone using alpha-vms for instance, then I'm unlikely to volunteer to fix alpha-vms bugs.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: GccPowerpc eabi HowTo - probem with stido functions ( sprintf)
moshed (sent by Nabble.com) wrote:
> Following your response I tried to add -v but it didn't succeed; maybe
> I put it in the wrong place.

It is a gcc option. Just put it in CFLAGS, or just run gcc manually with -v added to your other command line options.

> also if you can reply to me regarding the gcc powerpc building
> procedure (please see the previous mail)

There is nothing wrong with your powerpc build. The only problems are with your customizations, e.g. your linker script, Makefile, etc. If you want to learn how to do this kind of stuff, try the cross-gcc mailing list on sourceware.org. Or try building a default powerpc toolchain, figuring out how it works, and then figure out how to modify it to do what you want. The simplest default is using the simulator. For powerpc, that means compiling with -msim.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Bug in install of gfortran for gcc-4.0.2
Rainer Emrich wrote:
> rm -f /appl/shared/gcc/Linux/i686-pc-linux-gnu/gcc-4.0.2/bin/; \
> ln /appl/shared/gcc/Linux/i686-pc-linux-gnu/gcc-4.0.2/bin/gfortran /appl/shared/gcc/Linux/i686-pc-linux-gnu/gcc-4.0.2/bin/; \

Looking at gcc/fortran/Make-lang.in we see that the command here is

  rm -f $(DESTDIR)$(bindir)/$(GFORTRAN_TARGET_INSTALL_NAME)$(exeext);

and we also see that GFORTRAN_TARGET_INSTALL_NAME is not defined anywhere. This code was copied from the install-driver rule in gcc/Makefile.in, and we can see that it defines GCC_TARGET_INSTALL_NAME to $(target_noncanonical)-gcc.

However, looking at this, I see that the install-driver rule has changed significantly since it was copied into the fortran directory, and as a result, there are also other bugs here. For instance, fortran is setting GFORTRAN_CROSS_NAME from program_transform_cross_name, but this was deleted 2 years ago. So this whole fortran install rule needs to be rewritten, presumably by copying it again from the install-driver rule and modifying it appropriately. I think we need a PR here to keep track of this.

The install rule starts with a "-", so it shouldn't have caused the install to fail, even though it is wrong. I haven't seen install failures because of this problem.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: dump CFG and callgraph
sean yang wrote:
> (1) if I want to dump a gimple tree representation of a program, where
> should I start to look at? And I read gcc internal manual, the control
> flow graph information is represented by BB data structure. If I want
> to walk through a control flow graph, where should I start to look at?

-fdump-tree-all will dump gimple after every gimple optimization pass. Try looking at the output files, and try looking at the code that does the dumping.

> (2) Can I dump call-graph information to a file from gcc? It seems
> that -da doesn't give such a file.

-fdump-ipa-all will dump the cgraph info. You will need to enable IPA optimizations to see this stuff. Try compiling with -O3. The cgraph code is used even without IPA, but apparently we don't have support for dumping it in that case. Probably an oversight.

> cgraph.c says it builds "intraprocedural optimization", which seems a
> bit confusing (considering that callgraph can be used for inlining
> etc.)

Looks like a typo. English isn't the author's first language.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: weird installation problem on i686-pc-linux-gnu
Martin Reinecke wrote:
> i.e. the "gcc" binary ends up as "xgcc" in a subdirectory called "gcc".

The gcc makefile install rule just does

  rm -f $destdir/bin/gcc
  install xgcc $destdir/bin/gcc

If destdir/bin/gcc is non-existent, or a plain file, then this works as expected. If destdir/bin/gcc is a directory, then this will cause the result you saw, as xgcc will be copied into the directory.

The question is then: Who created the directory? That I probably can't figure out without makefile logs. It is also possible that the directory was there before you started the install. In this case, you might want to do a rm -rf $destdir to get a clean install.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Does gcc-3.4.3 for HP-UX 11.23/IA-64 work?
Albert Chin wrote:
> The "*libgcc" line from the 3.4.3/3.4.4 specs file:
>
> *libgcc: %{shared-libgcc:%{!mlp64:-lgcc_s}%{mlp64:-lgcc_s_hpux64} %{static|static-libgcc:-lgcc -lgcc_eh -lunwind}%{!static:%{!static-libgcc:%{!shared:%{!shared-libgcc:-lgcc -lgcc_eh -lunwind}%{shared-libgcc:-lgcc_s%M -lunwind -lgcc}}%{shared:-lgcc_s%M -lunwind%{!shared-libgcc:-lgcc}

It looks like there is a close-brace '}' missing after the -lgcc_s_hpux64. This will terminate the shared-libgcc rule before the static rule starts. Then delete one of the 4 close braces after the -lunwind. There are one too many braces here.

I don't see this problem in the FSF gcc tree. I'm guessing this is a mistake in the HP gcc sources that you are using.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: copyright assignement
Pierre-Matthieu anglade wrote: I'd like to contribute to the development of gfortran and for that, it appears that filling a copyright assignment form is mandatory. Can someone tell me where to get this? You can start with the form in http://gcc.gnu.org/ml/gcc/2003-06/msg02298.html -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Unwinding through signal handlers on IA-64/Linux
Eric Botcazou wrote:
> It works if the unwind library is HP's libunwind (aka system libunwind)
> but doesn't if the unwind library is the bundled one
> (config/ia64/unwind-ia64.c).

Is this the David Mosberger libunwind that you are referring to? As far as I know, the actual HP libunwind only supports HPUX. The one David Mosberger wrote is different from the HP libunwind implementation, and is free software that supports linux.

Anyways, I strongly recommend using David Mosberger's libunwind implementation. I consider any ia64 linux machine which doesn't have it installed to be broken. The gcc libunwind is probably never going to be as good as the one David Mosberger wrote.

That said, it would be nice to fix the gcc unwinder if we can. The code in question is in Jakub's patch that added the MD_HANDLE_UNWABI support. So it has been there since the beginning for this macro.
http://gcc.gnu.org/ml/gcc-patches/2003-12/msg00940.html

I find it curious that there are some unexplained differences between the MD_HANDLE_UNWABI and MD_FALLBACK_FRAME_STATE_FOR macros. The latter supports fpsr for instance, and the former does not. There is also a difference for psp. And the code at the end of each macro is different. It is this last bit you are complaining about. Some of these differences are probably bugs. Maybe changes to the linux kernel and/or glibc have affected this code, in which case we may need different versions for different kernel/glibc versions. Or maybe smarter code that works with every linux kernel and/or glibc version. I'm just guessing here. There is probably a reason why Jakub included that code in his patch, but I don't know offhand what it is.

I've never looked at the low level details of the unwinder, and how it interacts with signal stack frames, so I'm not sure how much help I can be here.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Does gcc-3.4.3 for HP-UX 11.23/IA-64 work?
Albert Chin wrote:
> I set ':set showmatch' in vim and all the braces match up.

Yes, the braces match up. You wouldn't have been able to build gcc if they didn't. However, they are still wrong. One of the braces is in the wrong place, and has to be moved. It looks like someone tried to modify the libgcc specs, got a build error, then added a brace to fix it, but mistakenly added the brace in the wrong place.

As mentioned before, there is a brace missing after the gcc_s_hpux64. This brace is needed to close off the shared-libgcc rule before the static-libgcc rule starts. You then must delete a brace from the end of the !static rule which has one too many.

The libgcc spec in the FSF gcc sources does not match the one you have given, so this appears to be a bug in patches that you or HP have added to gcc. Alternatively, it could be a miscompilation problem, but that seems rather unlikely.

> The above is from gcc-3.4.3 + some patches. However, the HP 3.4.4
> binary has the same "*libgcc" line. Do you have a GCC 3.4.x binary on
> HP/IA-64 which works correctly with -shared?

I do not have an ia64-hpux machine. I have an HP loaner, but not a copy of HPUX, so my machine is only running linux.

-- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Does gcc-3.4.3 for HP-UX 11.23/IA-64 work?
Steve Ellcey wrote: I am not convinced there is a bug here. There is an extremely obvious bug here. Please look at the specs that Albert Chin included in his email message. There is no way that -static-libgcc should require -shared-libgcc, which is what happens in his specs. The only part I don't understand is where these specs came from, as this doesn't match anything in the FSF tree. I'm guessing that HP is distributing a modified gcc with patches added to it, and these patches are buggy. I went to the HP web site, and I see that the gcc sources there are in a depot file. Can I do anything with a depot file even though I don't have HPUX? I haven't tried downloading the file to check. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: new operator in gcc-3.4
Lars Callenbach wrote: v = new double **[100]; operator new[]() -> operator new() -> malloc () -> _int_malloc() Without a testcase we can compile, we probably can't do anything except point out that a malloc failure is probably not due to a gcc problem. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: can DECL_RESULT be 0?
Rafael Ávila de Espíndola wrote: DECL_RESULT holds a RESULT_DECL node for the value of a function, or it is 0 for a function that returns no value. (C functions returning void have zero here.) I looked at gcc-1.42, and even there, a DECL_RESULT always holds a RESULT_DECL. It can never be zero. However, the DECL_RTL of this RESULT_DECL is zero for a function that returns no value. I'm not sure if this is a typo in the tree.def file, or whether perhaps an implementation change was made a very long time ago. Either way, this comment as written is wrong, and has been for a very long time. We could perhaps drop the comment about 0 values, or maybe expand it to say that the DECL_RTL of the RESULT_DECL is 0 for functions that return no value. aggregate_value_p doesn't look at DECL_RTL (DECL_RESULT (...)) so there is no problem there. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: strange result when compiling w/ -fpreprocessed but w/out -fdumpbase
Joern RENNECKE wrote: When you compile a file that contains a line directive, e.g.: using the -fpreprocessed option to cc1, but without -fdumpbase, the base filename of the line number directive is used both for the assembly output file and for debugging dumps from -da. This is probably a natural consequence of the fact that cpplib got split into a separate library. So now we need some way to communicate info from cpp to cc1, and this is done via a line directive. And at the least, the -fpreprocessed documentation is wrong when it states that this option is implicit when the file ends in .i; this effect of -fpreprocessed only appears when the option is actually passed to cc1. Try "touch tmp.i; ./xgcc -B./ -v tmp.i" and note that -fpreprocessed is passed by default to cc1. The docs aren't wrong here, you just missed the fact that there is a hidden option in the specs. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
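As a minimal sketch of the kind of preprocessed input being discussed (the "foo.c" name is made up for illustration):

    # 1 "foo.c"
    int main (void) { return 0; }

Per the report above, saving that as tmp.i and running cc1 with -fpreprocessed but without -fdumpbase makes "foo" rather than "tmp" the base name used for the assembly output and the -da dumps.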
Re: [gfortran] Second try: Problem parsing hexadecimal constants?
Ioannis E. Venetis wrote: I sent this message about a week ago, but didn't get any response. So, I try again. My question is whether this is a bug of gfortran and if I should open a bug report about it. I haven't found this in Bugzilla. Yes, go ahead and create a bug report, and mention that this is a regression from f77. This list is primarily for developers, not for users, and hence isn't always "user friendly". You won't always get an answer to questions posted here. Also, check out the fortran mailing list mentioned on the gcc.gnu.org web site. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: arm-rtems Ada Aligned_Word compilation error
Joel Sherrill <[EMAIL PROTECTED]> wrote: s-auxdec.ads:286:13: alignment for "Aligned_Word" must be at least 4 Any ideas? I'm guessing this is because ARM sets STRUCTURE_SIZE_BOUNDARY to 32 instead of 8, and this confuses the Ada front end. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: [Treelang] flag_signed_char
Rafael Espíndola wrote: Why does treelang defines signedness of char with flag_signed_char? IMHO it would be better if it had a fixed definition of it. I have tried to use Signedness of char depends on the OS. If you want compatibility with C, and in particular, the standard C library, then you should use the OS default, which is obtained from flag_signed_char. flag_signed_char can also be overridden by user options, but this is not the main reason for using it. Did you try testing this on a system where "char" defaults to unsigned char? It is possible that the patch might work fine on a signed-char system, but fail on an unsigned-char system. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
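A minimal sketch of why the OS default matters here: the same C source behaves differently depending on whether plain char is signed or unsigned, so a front end that hard-codes one choice can disagree with the system's C library and ABI.

    #include <stdio.h>

    int
    main (void)
    {
      char c = '\xff';
      /* Prints -1 where plain char is signed (e.g. x86 linux),
         and 255 where it is unsigned (e.g. powerpc or arm linux).  */
      printf ("%d\n", (int) c);
      return 0;
    }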
Re: fixincludes make check broken?
Andreas Jaeger wrote: Running make check in fixincludes on x86_64-gnu-linux I get the following failure: Just grepping for "_STRING_INCLUDED" it is easy to see the input rule in inclhack.def that defines this transformation, and the output rule in fixincl.x that actually does the transformation. This seems to be a generic problem. Anything using the "replace = " type rule is ending up with a missing newline on the last line. Looking at gcc-3.3.4, I see that the trailing newline is there in the fixincl.x file, but it isn't there in mainline. Offhand, I don't know what changed here. It could be that autogen changed, it could be that something in fixincludes changed, or it could even be that someone checked in the result of a broken autogen. If this is due to an autogen change, then perhaps the simplest fix is to change write_replacement in fixincl.c to emit the missing trailing newline after the fputs. I think the next step is to try to figure out if an autogen change broke this, or if a fixincludes change broke this. I'd suggest opening a PR to track this unless you want to volunteer to do this work. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
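A rough sketch of the workaround mentioned above, assuming the fix is applied where the replacement text gets written out; the function and variable names here are illustrative, not the actual fixincl.c code:

    #include <stdio.h>
    #include <string.h>

    /* Write the replacement text, then make sure it ends with a newline,
       since the strings generated into fixincl.x are missing it.  */
    static void
    emit_replacement (const char *pz_text, FILE *out_fp)
    {
      size_t len = strlen (pz_text);
      fputs (pz_text, out_fp);
      if (len == 0 || pz_text[len - 1] != '\n')
        fputc ('\n', out_fp);
    }

The real fix may instead belong in autogen or in the inclhack.def templates, depending on which change introduced the regression.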
Re: dwarf2 basic block start information
mathieu lacage wrote: Clearly, 0x11 is not a bb boundary so we have a bug. Looks like it could be the prologue end, but I don't see any obvious reason why this patch could do that. I suggest you try debugging your patch to see why you are getting the extra call with LINE_FLAG_BASIC_BLOCK set in this case. Using -p would make the diff more readable. We get complaints every time the debug info size increases. Since this is apparently only helpful to an optional utility, this extra debug info should not be emitted by default. There should be an option to emit it. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: apps built w/ -fstack-protector-all segfault
Peter S. Mazinger wrote: -fno-stack-protector-all is not recognised/implemented You could just submit this as a bug report into bugzilla. apps built w/ -fstack-protector-all segfault You will have to give us more info. Most gcc developers probably don't have a copy of UClibc, and it sounds like you have made gcc changes that weren't included in your message. So there isn't much we can do here except ask for more details. Try debugging the problem. If you can identify a specific problem here, and give us details about it, we can probably help. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Incorrect default options for h8300 target
Mike Lerwill wrote:
#undef TARGET_DEFAULT_TARGET_FLAGS
#define TARGET_DEFAULT_TARGET_FLAGS (MASK_QUICKCALL)
This is mostly right, except that second line should be
#define TARGET_DEFAULT_TARGET_FLAGS (TARGET_DEFAULT)
Alternatively, we could delete the 2 lines defining TARGET_DEFAULT in h8300.h, and then we can define it the way you suggested. However, we then lose some configurability here, as then subtargets can't override the default. So probably better to do what I described. The patch needs to be tested before it can be checked in. I'll try a build and see what happens. Well, maybe not. My subversion check-out is screwed up, and I don't see how to fix it. An update failed because of a bug with my external diff program. I fixed that. I fumbled around a bit trying to find the right svn command I need to recover from this. Now I've got a locked working copy, and I'm getting errors from svn cleanup. I need something equivalent to "cvs update -A" here. I know my tree is screwed up, I just want svn to ignore everything I have in my working copy and give me the most recent copy of everything. I think I need to check out a new tree from scratch at this point. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: dwarf2 basic block start information
Mathieu Lacage wrote: svn diff -x -p does not work here. Is there a magic incantation I should run to produce such a diff ? There are some instructions in the gcc wiki about how to do this. The gcc wiki is accessible from our home page, gcc.gnu.org. svn uses a built-in diff that doesn't support -p, so you need to tell svn to use an external diff program which is a shell script that calls GNU diff. Any suggestion on a name ? You could make this conditional on -g3 as a start. -g2 is the default. See the debug_info_level stuff. If you are serious about submitting this as a patch, there are a number of requirements that must be fulfilled. We need an FSF GCC copyright assignment. We need test results, for a debug patch, that would include both gcc and gdb testsuite results. The patch needs to be sent to the gcc-patches list. But first you need to get the patch working reliably. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: question about gcc
Paul Albrecht wrote: Is there some reason gcc hasn't been or can't be enhanced to provide output for a cross-referencing programs? FYI, there are a number of tools available for producing cross-referencing info. See for instance http://www.gnu.org/software/global/links.html and try looking at the up-link to global itself also. I found this by searching on the GNU web site. I've never used global myself. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: can DECL_RESULT be 0?
Rafael Ávila de Espíndola wrote: Thank you very much for showing that the problem was in the comment. I've checked in a patch to fix the comment typo. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Undelivered Mail Returned to Sender
Gabriel Dos Reis wrote: This is the fifth or so message from me, within the last few days, that gets rejected. What is up? Hmm, I don't see this in the overseers archive. I don't think it reached them. Maybe it triggered the spam filter for having too many capital letters in the subject line. Anyways, the overseers aren't sure what the problem is, but they are testing a newer version of the mailer software in the hope that it will fix the problem. You aren't the only person seeing this problem. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Undelivered Mail Returned to Sender
Jim Wilson wrote: Gabriel Dos Reis wrote: This is the fifth or so message from me, within the last few days, that gets rejected. What is up? Hmm, I don't see this in the overseers archive. Because it is sourceware.org not sourceware.com. I should have noticed that before I made the same mistake. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: failed to run testsuite for libstdc++ on x86_64-unknown-linux-gnu for target unix/-m32
Rainer Emrich wrote: ERROR: could not compile testsuite_shared.cc This is the important bit. The libstdc++ testsuite tried to compile a support file and failed, so it generated an error. The rest is just a tcl backtrace which we don't need. The real question here is why it failed. There should be useful info in the $target/libstdc++/testsuite/libstdc++.log file. You probably got a linker error. One obvious thing to check is whether you have the optional 32-bit libraries installed. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Target processor detection
Piotr Wyderski wrote: I am working on a portable low-level library of atomic operations, Like the existing libatomic-ops package? Why does __sparc_v9__ depend on the number of bits instead of the -mcpu? Is this a GCC bug? I've found an e-mail by Jakub Jelinek, which claims, that Jakub was probably describing how it works on linux. On Solaris, Sun sets the standard, and we have to follow Sun's lead here. It appears that the Sun compiler only defines __sparc_v9 for 64-bit compiles, so gcc must do so also. This is done in the gcc/config/sparc/sol2-bi.h file which is only used for solaris2.7 and up. If this assumption about the Sun compiler is wrong, then this is a gcc bug. Otherwise, no. -mcpu=i386 => __i386 = 300 I think this could get confusing very quickly. What values do we use for AMD parts? What values do we use for PentiumD parts which have 3-digit part numbers that conflict with this scheme? This info may also not be accurate enough to be useful in the end. Different pentium4 cores have different sets of features. So just knowing that something is a P4 isn't good enough to know what instructions exist. You have to have run time checks to read system registers that contain info about what hardware features are present. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
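To illustrate the run-time check point: compile-time macros like __sparc_v9__ or __i386 only tell you what the compiler was asked to target, not what the machine you are running on can actually do. A minimal x86 sketch of a run-time feature test (CPUID leaf 1, EDX bit 26 indicates SSE2; other architectures use different mechanisms):

    #include <stdio.h>

    static int
    have_sse2 (void)
    {
      unsigned int eax, ebx, ecx, edx;
      /* Ask the processor itself which features it implements.  */
      __asm__ ("cpuid"
               : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
               : "a" (1));
      return (edx >> 26) & 1;
    }

    int
    main (void)
    {
      printf ("sse2: %s\n", have_sse2 () ? "yes" : "no");
      return 0;
    }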
Re: A question about having multiple insns for one operation
Kazu Hirata wrote: I have a question about having multiple insns for one operation. Take m68k port for example. It has many SImode move insns like so: Now, is it OK to have this many? It isn't ideal. It can work if you aren't doing things that will cause reload to fail. I suspect the real problem here is just that the m68k.md file is very old, and hasn't been updated to reflect current practices. The movsi_const0 pattern for instance only has one constraint, and it is "g" which matches any operand. So there is no chance of a reload failure here. It would be better if this was part of the normal movsi pattern. It looks like it used to be redundant up until the 68040 support was added. The pushexthisi_const is actually a push insn, not a move insn. Note that we have both push_optab and mov_optab. So it is OK to have patterns to match a push insn which are separate from patterns to match a move insn, provided that both are complete. Nowadays, though, it would probably be best to use a define_expand for a pushsi pattern instead of a define_insn, thereby avoiding the need for two similar patterns. The m68k.md port predates the existence of define_expand. However, this isn't a named pushsi pattern anyways, so it isn't clear why it is here. I would guess that it started life as a pushsi pattern, was renamed to avoid its use, and never got cleaned up. Separate patterns for lea are common, in the old way of doing things. The i386.md file still has separate lea patterns for instance, though they come before add in the i386.md file, and they appear after add in the m68k.md file. The m68k.md one will probably only match for complicated addressing modes, like indexed addresses. The mismatched modes are curious and look wrong. Maybe it will never match anymore? -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Gcc help pages about __mode__ keyword
Anton Soppelsa wrote: I wasn't able to find information about "__DI__" (on the gcc manual pages). The modes are defined in the internals documentation. http://gcc.gnu.org/onlinedocs/gccint/Machine-Modes.html#Machine-Modes By the way, do they mean Double-, Single-, Half-, Quarter-, -Integer? Essentially, but the "Integer" here isn't the same as the C "int" keyword, as this depends on some compiler internal defines, which can't easily be documented for end users. The attribute mode syntax was invented for use in some gcc library files, and isn't really meant for end users. You would be better off using types defined in stdint.h or inttypes.h such as int64_t. A kernel is a special case, as a kernel can't rely on C library features like stdint.h, and hence may need to rely on gcc internal stuff like attribute mode. Bugs should be filed into our bugzilla database, if you want action taken on them. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
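For example, this is the library-style usage being referred to, next to the stdint.h spelling that ordinary user code should prefer:

    #include <stdint.h>

    /* DImode is an 8-byte integer on most targets; attribute mode lets
       libgcc-style code say that without knowing whether "long" or
       "long long" happens to be 64 bits on a given target.  */
    typedef int di_int __attribute__ ((mode (DI)));

    /* User code should normally just write the fixed-width type.  */
    int64_t a;
    di_int  b;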
Re: __thread and builtin memcpy() bug
Frank Cusack wrote: See <http://bugzilla.redhat.com/bugzilla> for instructions. The bug is not reproducible, so it is likely a hardware or OS problem. - the bug is quite reproducible, why does gcc say otherwise? This is due to a patch that Red Hat has added to the FSF gcc sources. When a Red Hat gcc release gets an ICE, it tries re-executing the command, to see if it fails again. If it doesn't, then that usually indicates a hardware problem. In this case, it may mean a bug in the Red Hat patch, which is something you would have to report to Red Hat. We don't support Red Hat gcc releases here, only FSF ones. As for the underlying bug, the ICE, I can reproduce this with FSF gcc-3.4.x sources. It would be OK to report that bug into the FSF gcc bugzilla database. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: m68k exception handling
Kövesdi György wrote: I built an environment for my 68020 board using gcc-4.0.2 and newlib-1.13.0. Everything seems good, but the exception handling is not working. Getting EH to work for a newlib-using target board may be complicated. How EH works depends on whether you are using DWARF2 unwind info, or builtin setjmp and longjmp. The builtin setjmp and longjmp approach is easier to get working, but has a higher run time overhead when no exceptions are thrown. In this scheme, we effectively execute a builtin setjmp every time you enter an EH region, and a throw is just a builtin longjmp call. This should work correctly if builtin_setjmp and builtin_longjmp are working correctly. See the docs for these builtin functions. The DWARF2 unwind info method has little or no overhead until an exception is thrown. This is the preferred method for most targets. In this scheme, we read the DWARF2 unwind info from the executable when an exception is thrown, parse the unwind tables, and then follow the directions encoded in the unwind tables until we reach a catch handler. This approach has obvious problems if you are using a disk-less OS-less target board. This approach also generally requires some C library support, which is present in glibc, but may not be present in newlib. You can find info on this approach here http://gcc.gnu.org/ml/gcc/2004-03/msg01779.html -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: GCC-3.4.5 Release Status
Gabriel Dos Reis wrote: At the moment, we have only one bug I consider release critical for 3.4.5. middle-end/24804 Produces wrong code I put an analysis in the PR. It is a gcse store motion problem. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: unable to find a register to spill in class
Nemanja Popov wrote: .../../gcc/libgcc2.c:535: error: unable to find a register to spill in class GR_REGS Reload is one of the hardest parts to get right with an initial port. You will probably have to spend some time learning the basics of what reload does. There are many different things that could be wrong here. It is hard to provide help without some additional info about the port. The place to start is in the greg file. There should be a section near the top that says "Reloads for insn # 6". This will be followed by a description of the reloads that were generated for this insn. Also of interest here is the movsi_general pattern, particularly its constraints. And the GR_REGS class, how many registers are in it, how many are allocatable, etc. You may need to set breakpoints in reload to debug this. Put a breakpoint in find_reloads() conditional on the insn uid if the problem appears to be there. Also, check find_regs and figure out why it failed. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: GCC-3.4.5 Release Status
Gabriel Dos Reis wrote: I would need an ia64 maintainer to comment on this. From There isn't enough ia64 maintainer bandwidth to provide detailed comments on testsuite results on old machines with old tools versions. Basically, it is only me, and I'm also trying to do a hundred other things in my free time, most of which aren't getting done. So I'm not going to care unless something important is broken, like a gcc bootstrap, glibc build, or linux kernel build. I wouldn't worry unless the results are worse than gcc-3.4.0 on the same machine. I have no such results to compare against, and can't generate them. I can find gcc-3.4.0 results for a suse system on the mailing list, but I'm not sure if that tells me anything useful. Some of these probably need an updated binutils to fix. Some of these are problems that weren't fixed until just before the gcc-4.0 release. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: MIPS: comparison modes in conditional branches
Adam Nemet wrote: Now if I am correct and this last thing is really a bug then the obvious question is whether it has anything to do with the more restrictive form for conditional branches on MIPS64? And of course if I fix it then whether it would be OK to lift the mode restrictions in the conditional branch patterns. Yes, the last bit looks like it could be a bug; a missing use of TRULY_NOOP_TRUNCATION somewhere. This isn't directly related to the current situation though. The MIPS port was converted from using CC0 to using a register for condition codes on April 27, 1992. The mistaken use of modes in branch tests occurred at that time. This happened between the gcc-2.1 and gcc-2.2 releases. This was long before the 64-bit support was added. When the 64-bit support was added later, the mistaken branch modes were expanded to include SImode and DImode variants. Since this occurred long ago, it would be difficult to determine exactly why it was done this way. It was perhaps just done that way because it looked obviously correct. Yes, it looks like fixing the combiner problem would make it possible to remove the mistaken mode checks. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: more strict-aliasing questions
Jack Howarth wrote: Is there some place where all the existing forms of strict-aliasing warnings are documented? So far I have only seen the error... We don't have such documentation unfortunately. There are 3 errors. There is the one you have seen. There is a similar one for incomplete types, which says "might break" instead of "will break" because whether there is a problem depends on how the type is completed. There is also a third error which occurs only with -Wstrict-aliasing=2, which again says "might break" instead of "will break". This option will detect more aliasing problems than just -Wstrict-aliasing, but it also may report problems where they don't exist. Or in other words, for ambiguous cases where the compiler can't tell if there may or may not be an aliasing problem, -Wstrict-aliasing will give no warning, and -Wstrict-aliasing=2 will give a warning. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
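A minimal example of code that triggers the first ("will break") warning, when compiled with -O2 -Wstrict-aliasing:

    int
    bits_of (float f)
    {
      /* warning: dereferencing type-punned pointer will break
         strict-aliasing rules  */
      return *(int *) &f;
    }

Rewriting such code with memcpy (or, as a GCC-documented extension, a union) avoids both the warning and the underlying aliasing problem.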
Re: problem with gcc 3.2.2
Mohamed Ghorab wrote: linux, it tries to compile some files but outputs the following error: /usr/include/c++/3.2.2/bits/fpos.h:60: 'streamoff' is used as a type, but is not defined as a type. This is a more appropriate question for the gcc-help list than the gcc list. The gcc list is primarily for developers. In order to help you, we will likely need more info, such as a testcase we can compile to reproduce the problem, and info about your operating system. See the info on reporting bugs at http://gcc.gnu.org/bugs.html This is probably more likely user error than a gcc bug, but we need the same kind of info to help with user errors. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Problem with bugzilla account
Eric Weddington wrote: I have a problem with making an email change for my bugzilla account. sysadmin requests can be sent to [EMAIL PROTECTED] -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: ARM spurious load
Shaun Jackman wrote: The following code snippet produces code that loads a register, r5, from memory, but never uses the value. You can report things like this into our bugzilla database, marking them as enhancement requests. We don't keep track of issues reported to the gcc list. I took a quick look. The underlying problem is that the arm.md file has an iordi3 pattern, which gets split late, preventing us from recognizing some optimization chances here. If I just disable the iordi3 pattern, then I get much better code.
        ldr r0, .L3
        mov r1, r0, asr #31
        orr r1, r1, #34603008
        @ lr needed for prologue
        bx lr
Disabling this pattern may result in worse code for other testcases though. It was presumably added for a reason. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Torbjorn's ieeelib.c
Mark Mitchell wrote: That message contains an IEEE floating-point emulation library, like fp-bit.c. However, the performance is considerably better; Joseph measured against fp-bit.c with a modern compiler, and ieeelib.c is about 10-15% better than the current code on EEMBC on a PowerPC 440. For the record, there is also FP emulation code in the gdb simulator. See the gdb src/sim/common/sim-fpu.c file. This code was originally taken from gcc's fp-bit.c, and has since been significantly improved both for speed and accuracy. It is quite a bit better than what we have in fp-bit.c. This is another option worth investigating. I don't know how it compares to the glibc code or Torbjorn's code. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Not usable email content encoding
I'm one of the old timers that likes our current work flow, but even I think that we are risking our future by staying with antiquated tools. One of the first things I need to teach new people is how to use email "properly". It is a barrier to entry for new contributors, since our requirements aren't how the rest of the world uses email anymore. LLVM has Phabricator. Some git-based projects are using Gerrit. GitHub and GitLab are useful services. We need to think about setting up easier ways for people to submit patches, rather than trying to fix all of the MUAs and MTAs in the world. Jim
Re: Modifying RTL cost model to know about long-latency loads
On Sat, Apr 11, 2020 at 4:28 PM Sasha Krassovsky via Gcc wrote: > I’m currently modifying the RISC-V backend for a manycore processor where > each core is connected over a network. Each core has a local scratchpad > memory, but can also read and write other cores’ scratchpads. I’d like to add > an attribute to give a hint to the optimizer about which loads will be remote > and therefore longer latency than others. GCC has support for the proposed named address space extension to the ISO C standard. You may be able to use this instead of defining your own attributes. I don't know if this helps with the rtx cost calculation though. This is mostly about support for more than one address space. See "Named Address Spaces" in the gcc internals docs, and the *_ADDR_SPACE_* stuff in the sources. The problem may be one similar to what Alan Modra mentioned. I would suggest stepping through the cost calculation code in a debugger to see what is happening. Jim
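As a rough sketch of what the user-side code could look like: the __remote qualifier below is hypothetical (a real port would introduce it through the TARGET_ADDR_SPACE_* hooks), and since stock GCC does not accept it, it is defined away here just so the example is self-contained:

    /* Hypothetical address-space qualifier for data living in another
       core's scratchpad; illustration only.  */
    #ifndef __remote
    #define __remote
    #endif

    extern __remote int neighbor_buf[256];

    int
    sum_neighbor (void)
    {
      int s = 0;
      for (int i = 0; i < 256; i++)
        s += neighbor_buf[i];  /* accesses a cost model could treat as long-latency */
      return s;
    }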
Re: Modifying RTL cost model to know about long-latency loads
On Thu, Apr 16, 2020 at 7:28 PM Sasha Krassovsky wrote: > @Jim I saw you were from SiFive - I noticed that modifying the costs for > integer multiplies in the riscv_tune_info structs didn’t affect the generated > code. Could this be why? rtx_costs is used for instruction selection. For instance, choosing whether to use a shift and add sequence as opposed to a multiply depends on rtx_cost. rtx_cost is not used for instruction scheduling. This uses the latency info from the pipeline model, e.g. generic.md. It looks like I didn't read your first message closely enough and should have mentioned this earlier. Changing multiply rtx_cost does affect code generation. Just try a testcase multiplying by a number of small prime factors, and you will see that which ones use shift/add and which ones use multiply depends on the multiply cost in the riscv_tune_info structs. This also factors into the optimization that turns divide by constant into a multiply. When this happens depends on the relative values of the multiply cost and the divide cost. Jim
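A simple way to see the effect: compile something like this at -O2, then change the multiply and divide costs in the riscv_tune_info entry selected by your -mtune and compare the generated code.

    /* With an expensive multiply, x * 6 typically becomes a shift/add
       sequence such as (x << 1) + (x << 2); with a cheap multiply, GCC
       emits a mul instruction instead.  */
    int
    times6 (int x)
    {
      return x * 6;
    }

    /* Division by a constant is turned into a multiply by a precomputed
       reciprocal when the relative multiply/divide costs make that
       profitable.  */
    unsigned
    div10 (unsigned x)
    {
      return x / 10u;
    }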