Re: Using up double diskspace for working copies (Was Re: Details for svn test repository)
Daniel Berlin wrote: You can't mix svn and svk commits against the same repo. It confuses svk (not svn). You can use svk readonly, of course. Actually, that's not quite right. While svk's depot must only be used by svk, the usual usage is to mirror a regular subversion repository with svk into a svk depot, then work with it from there using svk. Any changes in the svn repository are pulled in with svk sync, and any changes to the mirrored copy are applied to the backing subversion repository. Except that http://svk.elixus.org/?SVKFAQ says "Given an svk repository, do you have to use it via svk, or can you use svn programs to access it? Short answer: svn programs that only read the repository are working fine; those who actually write in the repository would bypass svk and make it fail." Vital difference. This will work:

  svk mirror svn://wherever //svnrepo/mirror
  # do svk things with //svnrepo/mirror
  # do svn things with svn://wherever

This won't work:

  svk mirror svn://wherever //svnrepo/mirror
  # do svk things with //svnrepo/mirror
  # do svn things with file://$HOME/.svk/local/svnrepo/mirror
[RFC] fold Reorganization Plan
Hi, I am planning to reorganize fold as suggested by Roger Sayle on IRC. The shortest way to describe this mini project would be to develop the tree equivalent of simplify_gen_ARITY and simplify_ARITY in the RTL world. Doing so should reduce the number of scratch tree nodes created when idioms like fold (buildN (...)) are used. Hopefully, we should be able to pick up some compile-time improvement along the way.

Step 1 -- Split fold into fold_unary, fold_binary, etc. Make fold a simple dispatch function into fold_unary, fold_binary, etc. The interfaces are kept exactly the same: we would pass one tree as the only argument, and each function returns the folded tree or the original tree. No change to the external interface (outside of fold-const.c) is made.

Step 2 -- Eliminate the direct use of the original tree. For example, fold currently has things like fold_binary_op_with_conditional_arg (t, ...) and fold_range_test (t). These functions, as they stand, would not work without the original tree "t". We need to change their interfaces so that they will work with decomposed arguments like code, type, op0, and op1. Again, no change to the external interface (outside of fold-const.c) is made.

Step 3 -- Change fold_unary, fold_binary, etc., so that they will return NULL_TREE instead of the original tree when no folding is performed. Change fold accordingly so that it will still return either the original tree or the folded tree (but not NULL_TREE). Again, no change to the external interface (outside of fold-const.c) is made.

Step 4 -- Change fold_unary, fold_binary, etc., so that they will take decomposed arguments like code, type, op0, op1. At this point, the fold_ARITY functions should be just like their RTL equivalents, the simplify_ARITY functions. Change fold accordingly. Again, no change to the external interface (outside of fold-const.c) is made.

Step 5 -- Provide fold_buildN as extern functions.

Step 6 -- Convert fold (buildN (...)) to fold_buildN.
This is very mechanical but very disruptive at the same time. We need to coordinate things first with various people, especially those maintaining branches. One thing I can say is that converting little by little would be even more disruptive than a one-shot conversion, as people with patches spanning several files may have to adjust their patches several times.

Step 7 -- Export fold_ARITY.

Step 8 -- Look for places where we can use fold_ARITY and convert them.

Step 9 -- Enjoy the result and continue to hack GCC. :-)

Summary --- The point is to do as much cleanup and reorganization as possible without changing the external interface before making the big conversion. By the way, the past proposals from Roger Sayle are found at: http://gcc.gnu.org/ml/gcc-patches/2003-10/msg01514.html http://gcc.gnu.org/ml/gcc/2004-01/msg00560.html Both of these are along the same lines as above. Any comments? Kazu Hirata
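Step 6 is mechanical enough that much of it could be scripted. A toy sketch of the textual rewrite involved — this one-liner is purely illustrative and would not survive nested calls or statements split across lines; the real conversion would need something smarter:

```shell
# Toy illustration of the Step 6 rewrite: fold (buildN (...)) -> fold_buildN (...).
# Converts the call name, then drops the one closing paren that is no longer needed.
echo 'x = fold (build2 (PLUS_EXPR, type, a, b));' |
  sed -E 's/fold \(build([0-9]+) \(/fold_build\1 (/; s/\)\);$/);/'
# prints: x = fold_build2 (PLUS_EXPR, type, a, b);
```

For the handful of sites the script cannot handle, the conversion would fall back to hand editing, which is why coordinating the one-shot change with branch maintainers matters.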
Re: Using up double diskspace for working copies (Was Re: Details for svn test repository)
> > > > You can't mix svn and svk commits against the same repo. It confuses svk > > (not svn). > > > > You can use svk readonly, of course. > > Actually, that's not quite right. While svk's depot must only be used by > svk, the usual usage is to mirror a regular subversion repository with > svk into a svk depot, then work with it from there using svk. Any > changes in the svn repository are pulled in with svk sync, and any > changes to the mirrored copy are applied to the backing subversion > repository. > Except that http://svk.elixus.org/?SVKFAQ says "Given an svk repository, do you have to use it via svk, or can you use svn programs to access it? Short answer: svn programs that only read the repository are working fine; those who actually write in the repository would bypass svk and make it fail. "
Re: Details for svn test repository
On Friday, February 11, 2005, at 05:29 PM, Daniel Berlin wrote: I'll keep the last branchpoint of each branch for the initial import Won't work either... Sometimes we reuse merge labels in non-obvious ways. top-200501-merge and top-200502-merge both exist, the two were used for, say, treeprofiling, and then a random other (important) branch uses the first for its merge. Also, even if you could track those down (you can't), it still would obliterate merge auditing, which is a very useful feature for finding exactly how someone screwed up a past merge. I don't see the advantage of wiping those labels yet. If you left all labels mentioned in any log entry, that would solve almost all the instances I know about, but sometimes people misspell the tags in obvious ways in the log messages.
Re: Details for svn test repository
Joern RENNECKE wrote: | Daniel Berlin wrote: | |> |> And towards this end,i'm working on making blame a lot faster |> |> | | Will this also cover annotate using an -r option to go past the last | reformatting | delta? | |> Other than that, what operations are people still worried about the |> slowness of? |> |> | | Because svn keeps an extra pristine copy for checkouts, I'll have to use | svn export for | automatic regression tests. With cvs, the overhead of the cvs | information is small, | so I could use checkouts, and when I wanted to work on / test some | patches with a | baseline known to build and to have a specific set of regressions, I | just copied over the | checked out tree and started working on it. With svn, I'll have to do a | fresh checkout of | the files/directories I'll be working on. The book mentions that there | is an intent to make | the extra pristine copy optional, but AFAICT this isn't something that | is available now. Can't you use a single checkout with svn switch, or patch it and svn revert when done? Alternately, use svk, and create local branches for whatever changes you want to save.
Re: Moving to an alternate VCS
On Fri, Feb 11, 2005 at 01:49:34PM +, Thorsten Glaser wrote: > I've always found the FSF's ChangeLog policy a bit weird > (for CVS projects - for RCS projects it's understandable). The ChangeLog fulfills a sometimes-ignored legal requirement of the GPL: > 2. You may modify your copy or copies of the Program or any portion > of it, thus forming a work based on the Program, and copy and > distribute such modifications or work under the terms of Section 1 > above, provided that you also meet all of these conditions: >a) You must cause the modified files to carry prominent notices >stating that you changed the files and the date of any change. The ChangeLog, maintained in the standard way for FSF projects, fulfills this requirement when it accompanies the sources. But it, or something like it, has to be there. (we could nitpick here; the ChangeLog is a separate file, but normally it goes with the distribution so we're covered). Now, we could instead auto-generate the ChangeLog from Subversion history entries, since the information contained there is the same. But we have to wind up with something like a ChangeLog in any distributed version of GCC. It might make the most sense to go the auto-generation route, and then the standard for checkin comments would be to use the ChangeLog format. The ChangeLog can then be generated by just appending the entries together, and tacking the "legacy ChangeLog" onto the end.
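The auto-generation route could start from commit messages that are already written in ChangeLog style, as suggested above. A toy sketch of the reformatting step, using an invented one-line record format (revision, author, date, message) rather than real `svn log` output:

```shell
# Toy sketch: turn simplified, hypothetical svn log records into ChangeLog headers.
# Assumed record format: "rNNN|author|YYYY-MM-DD|message" -- one commit per line.
# Real 'svn log' output is multi-line and would need proper parsing.
printf '%s\n' \
  'r100|jsm28|2005-02-11|* configure.ac: Regenerate.' \
  'r99|dberlin|2005-02-10|* tree-ssa.c (foo): New function.' |
awk -F'|' '{ printf "%s  %s\n\n\t%s\n\n", $3, $2, $4 }'
```

Appending the generated entries ahead of the legacy ChangeLog, as described above, then yields a file that satisfies the GPL's prominent-notice requirement.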
Re: 3.4.3 C++ parsing bug?
On Fri, 11 Feb 2005, Karel Gardas wrote: > On Fri, 11 Feb 2005, Jan Reimers wrote: > > > Can someone verify that this is valid C++ before I submit a bug report: > > > > // test.C > > template <class T> class A { static T* c; }; > > > > class B : public A<B> {}; > > > > B* A<B>::c = 0; > > // end test.C > > > > At least Comeau C++ 4.3.3 and Intel C++ 8.0 compile it and to me it also > looks ok, but I'm not at all a C++ language lawyer! Thanks to Joe Buck's note, I've found that I had compiled the code with the default, i.e. not particularly strict, ANSI C++ options. I can confirm that both Comeau and Intel also fail to compile the code above with the proper, stricter options (-A for como, -ansi for icpc). Thanks, Karel -- Karel Gardas [EMAIL PROTECTED] ObjectSecurity Ltd. http://www.objectsecurity.com
Re: Moving to an alternate VCS
Joe Buck dixit: >On Fri, Feb 11, 2005 at 01:49:34PM +, Thorsten Glaser wrote: >> I've always found the FSF's ChangeLog policy a bit weird >> (for CVS projects - for RCS projects it's understandable). > >The ChangeLog fulfills a sometimes-ignored legal requirement of the GPL: Sure, but other projects do it differently. On every checkin, an eMail is generated and an entry appended to an automatically rotated ChangeLog file. (Thanks to the (neglected by the other BSDs) update to GNU cvs 1.12, I can even add changeset-like facilities.) >But we have >to wind up with something like a ChangeLog in any distributed version of >GCC. Sure, it's a change. I was just trying to hint people that it might be worthwhile to think about it, and a bit curious myself what the GNU developers say about that. bye, //mirabile
Re: [RFC] fold Reorganization Plan
On Feb 12, 2005, at 12:06 AM, Kazu Hirata wrote: Any comments? I like this change. -- Pinski
Re: Using up double diskspace for working copies (Was Re: Details for svn test repository)
Daniel Berlin wrote: > >> > >> > You can't mix svn and svk commits against the same repo. It confuses >> > svk (not svn). >> > >> > You can use svk readonly, of course. >> >> Actually, that's not quite right. While svk's depot must only be used by >> svk, the usual usage is to mirror a regular subversion repository with >> svk into a svk depot, then work with it from there using svk. Any >> changes in the svn repository are pulled in with svk sync, and any >> changes to the mirrored copy are applied to the backing subversion >> repository. >> > > Except that http://svk.elixus.org/?SVKFAQ > > says "Given an svk repository, do you have to use it via svk, or can you > use svn programs to access it? > > Short answer: svn programs that only read the repository are working > fine; those who actually write in the repository would bypass svk and > make it fail. > " Right - using svn programs to directly modify the svk depot (which is its local 'repository') is touchy. You *can* do it, but you have to be quite careful about the svk:* properties used to track merges and mirrors. Generally there's no need, other than perhaps using a read-only client to make your local work visible to others prior to pushing it upstream. However, none of this means you can't use svk as your local client with a normal svn repository mirrored into the depot. In fact, it's probably the most common use of svk. This is *not* a readonly setup, as svk knows how to push commits through and back to upstream. Obviously this can entail conflict resolution if your local mirror has become a local branch, since svk also allows you to make commits that you haven't pushed back up yet, but that's not really different from having it all in a loose WC without a local VCS.
Re: Details for svn test repository
On Fri, 2005-02-11 at 18:40 -0800, Mike Stump wrote: > On Friday, February 11, 2005, at 05:29 PM, Daniel Berlin wrote: > > I'll keep the last branchpoint of each branch for the initial import > > Won't work either... Sometimes we reuses merge labels in non-obvious > ways. top-200501-merge and top-200502-merge both exist, the two were > used for, say, treeprofiling, and then a random other (important) > branch uses the first for its merge. > > Also, even if you could track those down (you can't), it still would > obliterate merge auditing, which is a very useful feature to find how > exactly how someone screwed up a past merge. > > I don't see the advantage of wiping those labels yet. > > If you left all labels mentioned in any log entry, that would almost > solve most instances that I know about, but, sometimes people misspell > the tags in obvious ways in the log messages. > Fine, i'll just keep all the non-snapshot tags for now.
Trikke at the LA Marathon
For the last four years, Trikke has participated in the LA Marathon's Acura Bike Tour; now we're inviting you to join us. Early Sunday morning, March 6th, the LA Marathon course is thick with cyclists for the 23.5-mile fun ride. LA's city streets (Exposition, Venice, Wilshire, Olympic, Fairfax, Vermont, and Figueroa) are all yours to carve and ride on only one beautiful spring morning a year. This year, let's rip it up together! Please join us in the first official Trikke group entry. The course is relatively easy to ride and most intermediate riders can finish it in 2.5 to 3 hours. Take advantage of our great entry offer and we'll sign you up, pay your entry fee, give you a Trikke t-shirt to wear on the ride, and treat you to lunch after the ride. It will be a blast, with several of the TV news services (Fox, ESPN, ABC) focusing their cameras on us for feature stories on Trikke. As well, you will be entered in our raffle for a new Trikke T12 Roadster! Go to www.trikke.com/home and click on the LA Marathon banner to get more information about the ride and sign up. As we get closer to the big day, visit our site for more detailed information about group assembly time and location on the morning of the 6th. Sign up now for a great ride with Trikke's designer Gildo Beleski, the Trikke Tech crew, and a whole lot of other Trikke fanatics! See you there! This is a one-time email. You are not on any mailing list unless you requested it.
Re: Details for svn test repository
Daniel Berlin wrote: On Fri, 2005-02-11 at 17:13 +, Joern RENNECKE wrote: Joseph S. Myers wrote: You mean the revision number of the whole checked out tree, which the "svnversion" utility will tell you in any checked out svn tree (including whether the tree is modified or mixed version). Given such a number, if you don't intend to do svn operations on that tree afterwards you can remove the .svn directories and reconstruct the checked out tree using the version number later. Is there an svn command to do that without doing a new checkout from the repository? You mean tell you what the current version of the repo is? svnlook can do it, but it requires direct access to the repo. I could always make something that just prints out svnlook youngest to a socket and closes the connection (so you could netcat it or whatever), if that is what you need. Alternately, parse the output of svn ls -v on the repository root, and select the highest revision number cited. This requires no changes on the server.
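The "parse svn ls -v" approach suggested above needs nothing beyond standard tools, since the revision number is the first field of each output line. A sketch of the extraction step; the sample lines below are invented for illustration, not captured output:

```shell
# Toy sketch: pick the highest revision cited in 'svn ls -v'-style output.
# In practice you would pipe in:  svn ls -v <repository-root-URL>
printf '%s\n' \
  '  104 jsm28   Feb 11 12:00 trunk/' \
  '  107 dberlin Feb 11 13:30 branches/' \
  '   98 zack    Feb 10 09:15 tags/' |
awk '{ if ($1+0 > max) max = $1+0 } END { print max }'
# prints: 107
```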
Re: Details for svn test repository
On Fri, 2005-02-11 at 20:25 -0500, Nathanael Nerode wrote: > First of all, I totally approve of moving to Subversion. > > Daniel Berlin wrote: > >I also plan on excluding merge tags > > It's not safe to exclude the most recent mergepoint tag for > a live branch. We will lose necessary information for the next > merge to that branch. > > You wrote elsewhere: > >Find the current revision of the apple-ppc-branch using svn info on your > >checked out copy. > Right, this gives the revision number for the apple-ppc-branch. > > >From your checked out copy of the apple ppc branch, type: > > > >"svn merge -rREV:HEAD > >svn://svn.toolchain.org/svn/gcc/trunk " > > > >That will merge in all the changes from the HEAD since the last time you > >merged. > > No, it won't. This compares the status of "trunk" between your branch > and HEAD. Is "trunk" on apple-ppc-branch going to contain the > trunk from the last time apple-ppc-branch was merged from trunk? > Why *would* it? (I suppose special procedures used during previous merges > could have had that effect, but that doesn't apply to converted-from-CVS > stuff.) > > Obviously, for a brand-new branch, it would contain the branchpoint, > which is correct. Yes, I was misthinking. You are completely correct. Answering 300 emails means I'm bound to give wrong answers occasionally :) I'll keep the last branchpoint of each branch for the initial import > For a branch which has had a merge from trunk already, > it will *not*. I'm looking at the docs for svn 1.1.3 here. It's planned for some point (and svk is starting to have a good handle on it). > > (For a new, all-svn branch, there are easier ways of keeping track of that > revision number, like putting it in the log message for the merge.) Or using svnmerge, which does the same thing using properties. >
Re: Using up double diskspace for working copies (Was Re: Details for svn test repository)
Daniel Berlin wrote: On Fri, 2005-02-11 at 12:08 -0500, Daniel Jacobowitz wrote: On Fri, Feb 11, 2005 at 12:00:26PM -0500, Daniel Berlin wrote: Because if it's a show stopper, then so will be arch, monotone, or any of our other replacements (they all either store the entire repo on your disk, or have stuff in the working copy), and we will be stuck with cvs until everyone is happy to use up double/whatever disk. Actually, having one copy of the entire repository might be cheaper than having several dozen double checkouts. Yes, at some point the double space outruns the cost of the entire repo. For gcc, the cost of the entire repo is 4.4 gig right now. For your case, it might be cheaper to rsync the repo (unlike cvs, for each extra global revision to download, it's going to be 1 new file, and the old files won't be different. So it's going to be a *very fast* rsync), and export directly from that. Since I think this is a very important point, I'm going to contribute a couple of supporting statistics... The CVS repository is about 2.6GB. 3200989 cvsfiles oh, wait, that includes wwwdocs and whatnot, sorry. A complete CVS checkout is 260MB, or about 10% of the repository. If you've just got the one checkout, the checkouts win. I've got a dozen right now; from what I've been hearing, svk would be the biggest win for me. You can't mix svn and svk commits against the same repo. It confuses svk (not svn). You can use svk readonly, of course. Actually, that's not quite right. While svk's depot must only be used by svk, the usual usage is to mirror a regular subversion repository with svk into a svk depot, then work with it from there using svk. Any changes in the svn repository are pulled in with svk sync, and any changes to the mirrored copy are applied to the backing subversion repository. For more information: http://svk.elixus.org/?SVKUsage
Re: Details for svn test repository
First of all, I totally approve of moving to Subversion. Daniel Berlin wrote: >I also plan on excluding merge tags It's not safe to exclude the most recent mergepoint tag for a live branch. We will lose necessary information for the next merge to that branch. You wrote elsewhere: >Find the current revision of the apple-ppc-branch using svn info on your >checked out copy. Right, this gives the revision number for the apple-ppc-branch. >From your checked out copy of the apple ppc branch, type: > >"svn merge -rREV:HEAD >svn://svn.toolchain.org/svn/gcc/trunk " > >That will merge in all the changes from the HEAD since the last time you >merged. No, it won't. This compares the status of "trunk" between your branch and HEAD. Is "trunk" on apple-ppc-branch going to contain the trunk from the last time apple-ppc-branch was merged from trunk? Why *would* it? (I suppose special procedures used during previous merges could have had that effect, but that doesn't apply to converted-from-CVS stuff.) Obviously, for a brand-new branch, it would contain the branchpoint, which is correct. For a branch which has had a merge from trunk already, it will *not*. I'm looking at the docs for svn 1.1.3 here. From the Subversion book: "Ideally, your version control system should prevent the double-application of changes to a branch. It should automatically remember which changes a branch has already received, and be able to list them for you. It should use this information to help automate merges as much as possible. "Unfortunately, Subversion is not such a system. Like CVS, Subversion does not yet record any information about merge operations. When you commit local modifications, the repository has no idea whether those changes came from running svn merge, or from just hand-editing the files." This is especially true of old merges being ported over from CVS. In order to merge correctly, we need to know the last repository revision number on trunk which was merged into the branch.
This means, in the case of an old merge done in CVS, the revision number corresponding to the last mergepoint. (For a new, all-svn branch, there are easier ways of keeping track of that revision number, like putting it in the log message for the merge.) -- This space intentionally left blank.
Re: Details for svn test repository
On Fri, 2005-02-11 at 20:29 -0500, Daniel Berlin wrote: > On Fri, 2005-02-11 at 20:25 -0500, Nathanael Nerode wrote: > > (For a new, all-svn branch, there are easier ways of keeping track of that > > revision number, like putting it in the log message for the merge.) > > Or using svnmerge, which does the same thing using properties. Maybe the merge tags can be translated in the conversion to svnmerge's properties? Then we can just all use svnmerge and be happy. zw
Re: Details for svn test repository
On Fri, 2005-02-11 at 17:38 -0800, Zack Weinberg wrote: > On Fri, 2005-02-11 at 20:29 -0500, Daniel Berlin wrote: > > On Fri, 2005-02-11 at 20:25 -0500, Nathanael Nerode wrote: > > > (For a new, all-svn branch, there are easier ways of keeping track of that > > > revision number, like putting it in the log message for the merge.) > > > > Or using svnmerge, which does the same thing using properties. > > Maybe the merge tags can be translated in the conversion to svnmerge's > properties? Then we can just all use svnmerge and be happy. This is almost possible. The problem is that you need to know what other revisions were applied to the branch already, which is hard to calculate. :) Thus, it's better to have people svnmerge init their own branches if they want, since it lets you record what revisions were already merged at init time through one of its options. Or you can solve the problem of figuring out which revisions were already applied, and I'll happily throw the code into cvs2svn.
Re: 3.4.3 C++ parsing bug?
On Fri, Feb 11, 2005 at 11:37:52PM +0100, Karel Gardas wrote: > On Fri, 11 Feb 2005, Karel Gardas wrote: > > > On Fri, 11 Feb 2005, Jan Reimers wrote: > > > > > Can someone verify that this is valid C++ before I submit a bug report: > > > > > > // test.C > > > template <class T> class A { static T* c; }; > > > > > > class B : public A<B> {}; > > > > > > B* A<B>::c = 0; > > > // end test.C > > > > > > > At least Comeau C++ 4.3.3 and Intel C++ 8.0 compile it and to me it also > > looks ok, but I'm not at all a C++ language lawyer! > > Thanks to Joe Buck's note, I've found that I had compiled the code with > the default, i.e. not particularly strict, ANSI C++ options. I can confirm that both > Comeau and Intel also fail to compile the code above with the proper, > stricter options (-A for como, -ansi for icpc). I wouldn't mind at all if gcc had a clearer error message, though, even to the point of suggesting "template <>".
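For reference, this is roughly what the suggested "template <>" fix would look like; the test case shown here is reconstructed, since the angle brackets in the original post appear to have been eaten in transit:

```shell
# Reconstructed test case with the explicit-specialization marker applied.
cat <<'EOF'
// test.C
template <class T> class A { static T* c; };

class B : public A<B> {};

// Without the 'template <>' marker, this definition is what strict
// compilers reject; with it, the explicit specialization is valid.
template <> B* A<B>::c = 0;
// end test.C
EOF
```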
gcc-3.4-20050211 is now available
Snapshot gcc-3.4-20050211 is now available on ftp://gcc.gnu.org/pub/gcc/snapshots/3.4-20050211/ and on various mirrors, see http://gcc.gnu.org/mirrors.html for details. This snapshot has been generated from the GCC 3.4 CVS branch with the following options: -rgcc-ss-3_4-20050211

You'll find:

gcc-3.4-20050211.tar.bz2            Complete GCC (includes all of below)
gcc-core-3.4-20050211.tar.bz2       C front end and core compiler
gcc-ada-3.4-20050211.tar.bz2        Ada front end and runtime
gcc-g++-3.4-20050211.tar.bz2        C++ front end and runtime
gcc-g77-3.4-20050211.tar.bz2        Fortran 77 front end and runtime
gcc-java-3.4-20050211.tar.bz2       Java front end and runtime
gcc-objc-3.4-20050211.tar.bz2       Objective-C front end and runtime
gcc-testsuite-3.4-20050211.tar.bz2  The GCC testsuite

Diffs from 3.4-20050128 are available in the diffs/ subdirectory. When a particular snapshot is ready for public consumption the LATEST-3.4 link is updated and a message is sent to the gcc list. Please do not use a snapshot before it has been announced that way.
Re: GCC 3.3.5: -march=i586 does not use all pentium FPU instructions
Peter Soetens wrote: > I was wondering why the above gcc parameter does not enable the use > of the fst/fld opcodes for pentium processors, while -march=i686 > does. The Intel manuals specifically say that they can be used across > all pentium processors. There are two options for telling the compiler about your target processor. The -march=xyz option selects the instruction set to use, while the -mcpu=xyz option says which processor the program should run fastest on. If you supply the -march option but not the -mcpu option, the compiler will assume you use the same processor for both. The difference in the code you see is actually due to the -mcpu option. For your first code example, you implicitly use -mcpu=586, and for the second example, you use -mcpu=686. So your first code is supposed to run fastest on a Pentium-class processor, while your second code is supposed to run fastest on a Pentium2-class processor. Now, on a Pentium processor, the FLD and FST instructions are (relatively) expensive, so the compiler decides it is faster to do load/store operations using integer registers. On Pentium2-class processors, the FLD and FST instructions are much faster, and the compiler considers it worthwhile to use them. So if you want to generate code that is guaranteed to run on Pentium processors but runs best on Pentium2-class processors, you have to use both the options -march=pentium and -mcpu=pentium2 (you can also use 586 and 686, which are aliases, but I would recommend using real processor names). Of course, as Pentium2 processors are not so common any more either, you can also tune your code for the Pentium4 using -mcpu=pentium4, or for AMD Athlon processors using -mcpu=athlon or some specific Athlon model. Note that in newer versions of GCC (starting with 3.4.0), the -mcpu option has been deprecated and replaced by the -mtune option, to be consistent with other processor architectures supported by GCC. -- Marcel Cox (using XanaNews 1.17.2.4)
adding new instruction
Hi, I'd like to add a new instruction based on the Thumb ISA. I have added the instruction in both as and gcc, and both of them are working correctly. But when I call ld it shows an error like:

/home/.../arm-elf-ld : /home/../arm-elf/lib/libc.a(printf.o)(printf): warning: interworking not enabled
first occurrence: /tmp/cc00zhyh.o : thumb call to arm
/tmp/cc00zhyh.o(.text+0x4e): In function 'main'
new.c: internal error: dangerous error

Do I have to change anything in ld? I have searched for the ld source file but I couldn't find one in the ld folder. Which file has to be modified first, and what kind of changes are needed? Thanks in advance
Re: Moving to an alternate VCS
Joe Buck <[EMAIL PROTECTED]> writes: > > It might make the most sense to go the auto-generation route, and then > the standard for checkin comments would be to use the ChangeLog format. > The ChangeLog can then be generated by just appending the entries > together, and tacking the "legacy ChangeLog" onto the end. In addition: It would be nice if checkin comments would contain a link to the patch submission in gcc-patches. Often the patch submission contains a high level description and a lot of useful background information that is not always obvious from the patch. Usually it can be found by searching the archives, but it would be much nicer to have a direct link. -Andi
regression in ra ?
Hi, I've probably found a small issue in ra. Maybe there's a bug filed for it already, but I can't find it. It shows up for a simple loop (for( unsigned int i=0;i<...)) in http://viewcvs.pointblue.com.pl/index.cgi/*checkout*/gj/neurony/neuron.cpp lines 43-45. Thanks. -- Vercetti
Re: regression in ra ?
On Saturday 12 February 2005 13:23, Tommy Vercetti wrote: > Hi > > I've found small issue in ra probably. Maybe there's bug filled out for it > already, but I can't find it. With what you've reported here, we can't help you. Please read http://gcc.gnu.org/bugs.html, "Reporting Bugs", and file a bug report. Gr. Steven
Re: regression in ra ?
On Saturday 12 February 2005 14:13, you wrote: > On Saturday 12 February 2005 13:23, Tommy Vercetti wrote: > > Hi > > > > I've found small issue in ra probably. Maybe there's bug filled out for > > it already, but I can't find it. > > With what you've reported here, we can't help you. > Please read http://gcc.gnu.org/bugs.html, "Reporting Bugs", and file > a bug report. Don't have to CC me, I'm on the list. it's today's gcc: gcc-4.0 (GCC) 4.0.0 20050212 (experimental) I've attached link to source, in the same dir there's Makefile. It was compiled with: g++-4.0 -pedantic --save-temps -ftree-vectorize -O3 -Wall -mtune=pentium3 on p3. -- Vercetti
Re: [RFC] fold Reorganization Plan
Kazu Hirata wrote:

> I am planning to reorganize fold as suggested by Roger Sayle on IRC.

good for you! reorganizing fold is an excellent idea.

> The shortest way to describe this mini project would be to develop the
> tree equivalent of simplify_gen_ARITY and simplify_ARITY in the RTL
> world. Doing so should reduce the number of scratch tree nodes created
> when idioms like fold (buildN (...)) are used. Hopefully, we should be
> able to pick up some compile-time improvement along the way. By the way,
> the past proposals from Roger Sayle are found at:
> http://gcc.gnu.org/ml/gcc-patches/2003-10/msg01514.html
> http://gcc.gnu.org/ml/gcc/2004-01/msg00560.html
> Both of these are along the same lines as above. Any comments?

I question if it is better to fold early. As I've said before, I think the optimizations that fold performs should be turned into a proper SSA optimization phase% -- that can be repeatedly applied as necessary. In the front end, folding should not generally be done. I see two needs for early folding,

1) evaluation of constant expressions, where the language requires a compile time constant value. Such values should be calculated lazily by some language specific function (possibly using common backend functions). This would produce an error or an appropriate _CST node.

2) A common occurrence is such things as fold (build (PLUS_EXPR, a, b)), where it is known that a and b are _CST nodes. This kind of thing happens in array initialization and vtable construction, and probably several other bookkeeping tasks. Here the operation is virtually always one of ADD, SUB, MUL and the types are integral types. Here we should have an interface that ONLY deals with those three operations and ONLY deals with already-coerced integral types. Whether we have separate eval_integral_OP (a, b) functions or eval_integral (OP, a, b), I don't know.

nathan

% I guess gimplifying unfolded expressions might lead to an explosion of statements.
It might be prudent to apply folding during the gimplification phase as well. If the SSA folding pass has the correct interface, I'd hope it can be applied to gimple statements and pre-gimplified trees. -- Nathan Sidwell:: http://www.codesourcery.com :: CodeSourcery LLC [EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk
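Nathan's proposed constant evaluator can be sketched in a few lines. This is a minimal illustration only, not GCC code; the function name eval_integral and the operation strings follow his hypothetical interface from the message above:

```python
# Minimal sketch of the constant-evaluation interface described above:
# only ADD/SUB/MUL, only already-coerced integral constants. All names
# here (eval_integral, the op strings) are hypothetical, not GCC's API.

def eval_integral(op, a, b):
    """Fold op(a, b) when both operands are integer constants.

    Returns the folded constant, or None when either operand is not a
    compile-time integer constant (the caller must then report an error
    or keep the expression unfolded).
    """
    if not (isinstance(a, int) and isinstance(b, int)):
        return None
    if op == "ADD":
        return a + b
    if op == "SUB":
        return a - b
    if op == "MUL":
        return a * b
    raise ValueError(f"unsupported operation: {op}")

print(eval_integral("ADD", 2, 3))    # -> 5
print(eval_integral("ADD", "x", 3))  # -> None (not a constant)
```

Whether the dispatch lives in one function or in separate per-operation helpers is, as Nathan says, an open design choice; this sketch only shows that the surface is tiny.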
Re: regression in ra ?
On Saturday 12 February 2005 13:23, Tommy Vercetti wrote:
> Hi
>
> I've found small issue in ra probably. Maybe there's bug filled out for it
> already, but I can't find it.
>
> For simple loop like that:
>
> for( unsigned int i=0;i
> wagi[i] = 0;
> }

and on ultrasparc it works fine:

nop
ld [%i0+4], %i0
mov 0, %g1
.LL59:
add %g1, 1, %g1
st %g0, [%i0]
st %g0, [%i0+4]
cmp %i1, %g1
bne .LL59

Although gcc4 there is a bit older: gcc-4.0 (GCC) 4.0.0 20050123 (experimental). I am currently in the middle of updating it, so will see if there is a difference afterwards. -- GJ
Re: [RFC] fold Reorganization Plan
Nathan Sidwell writes: > > I question if it is better to fold early. As I've said before, I think > the optimizations that fold performs should be turned into a proper SSA > optimization phase% -- that can be repeatedly applied as necessary. In the > front end, folding should not generally be done. There have been several occasions on which I've fixed bugs caused by the fact that folding that is legal in C and C++ has not been legal in Java. It might be that moving folding away from the front end will be good, but we'd still need support for initialized constant decls. Andrew
Re: [RFC] fold Reorganization Plan
Hi Nathan,

> I question if it is better to fold early. As I've said before, I think
> the optimizations that fold performs should be turned into a proper SSA
> optimization phase% -- that can be repeatedly applied as necessary. In
> the front end, folding should not generally be done. I see two needs for
> early folding,

I may not be quite answering your question, but I have a comment. Maybe we can have an early fold and a general fold. The former would handle constant expressions for front ends. The latter is a full fledged version but optimized to handle GIMPLE statements. The reasons to optimize the full fledged version to handle GIMPLE statements include

1) We can remove parts of fold handling those cases that never occur in the GIMPLE world.

2) Currently, fold has so many transformations that look into a heavily nested tree, but all of those are useless in the GIMPLE world unless one builds a scratch node and passes it to fold. An example of such a transformation would be:

(A * C) + (B * C) -> (A + B) * C

We would express this in GIMPLE as

D = A * C;
E = B * C;
F = D + E;

Given D + E, we can instead have fold chase the SSA_NAME_DEF_STMT of D and E so that the above transformation is performed. Whether we want to always chase SSA_NAME_DEF_STMT is another question. Richard Henderson once suggested putting in a hook for chasing. In a tree combiner, we may want to limit SSA_NAME_DEF_STMT chasing to the case where the SSA_NAME is used only once. In other situations, we might want to have a more relaxed hook, although I cannot come up with a specific example.

Kazu Hirata
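The def-statement chasing Kazu describes can be sketched with a toy IR: a dict mapping SSA names to their defining expressions plays the role of SSA_NAME_DEF_STMT. Everything here is hypothetical illustration, not GCC's actual data structures:

```python
# Toy illustration of the point above: in GIMPLE, D = A*C; E = B*C; F = D+E
# are separate statements, so a folder must chase each operand's defining
# statement to spot (A*C)+(B*C) -> (A+B)*C. Hypothetical toy IR, not GCC code.

defs = {}  # SSA name -> (opcode, op0, op1): the "SSA_NAME_DEF_STMT" map

def chase(name):
    """Return the defining expression of an SSA name, or the name itself."""
    return defs.get(name, name)

def fold_plus(lhs, rhs):
    """Fold lhs + rhs by looking through the defining statements."""
    dl, dr = chase(lhs), chase(rhs)
    # (A * C) + (B * C) -> (A + B) * C
    if (isinstance(dl, tuple) and isinstance(dr, tuple)
            and dl[0] == dr[0] == "mul" and dl[2] == dr[2]):
        return ("mul", ("plus", dl[1], dr[1]), dl[2])
    return ("plus", lhs, rhs)

defs["D"] = ("mul", "A", "C")
defs["E"] = ("mul", "B", "C")
print(fold_plus("D", "E"))   # -> ('mul', ('plus', 'A', 'B'), 'C')
```

A real tree combiner would additionally check how many uses each SSA name has before looking through its definition, which is exactly the hook Kazu mentions.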
Re: [RFC] fold Reorganization Plan
Kazu, Maybe we can have an early fold and a general fold. The former would handle constant expressions for front ends. The latter is a full fledged version but optimized to handle GIMPLE statements. hm, we may be in violent agreement :) It depends what you mean by 'early fold'. You say it would handle constant expressions for FEs -- isn't that the same as what I described as a constant expression evaluator? After all, if it is just for constant exprs, it is required to 'fold' to a _CST node -- or give an error. If I've misunderstood, could you clarify? The reasons to optimize the full fledged version to handle GIMPLE statements include 1) We can remove parts of fold handling those cases that never occur in the GIMPLE world. Do you have examples of what this would be? I have no feeling for what might be foldable in GIMPLE. I'm curious (I don't think it'll affect the discussion though). 2) Currently, fold has so many transformations that look into a heavily nested tree, but all of those are useless in the GIMPLE world unless one builds scratch node and passes it to fold. An example of such a transformation would be: (A * C) + (B * C) -> (A + B) * C hm, there's really two kinds of thing that need to happen, 1) the kind of reassociation & strength reduction that you describe 2) folding of constant expressions, for when we discover that some of A, B and C are constants. I don't know whether these operations should be part of the same SSA optimization or not. #2 is more of a constant propagation kind of thing I guess. #1 is the kind of thing that has made const-fold so complicated. #1 is the important thing to add to the SSA optimizers, isn't it? nathan -- Nathan Sidwell:: http://www.codesourcery.com :: CodeSourcery LLC [EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk
Re: [RFC] fold Reorganization Plan
On Sat, 12 Feb 2005, Nathan Sidwell wrote:
> I question if it is better to fold early. As I've said before, I think
> the optimizations that fold performs should be turned into a proper SSA
> optimization phase% -- that can be repeatedly applied as necessary.

As for a proper tree-ssa optimization pass, I believe Andrew Pinski's proposed tree-combiner pass, which attempts to merge/combine consecutive tree statements and check whether the result can be folded, would fit your description.

However, the utility of early fold to the GCC compiler is much greater than simply compile-time evaluating expressions with constant operands. One of the reasons that fold is so fast is that it can rely on the fact that all of a tree's operands have already been folded. In fact, much of the middle-end makes or can make use of the assumptions that constant operands to binary operators appear second, that NOP_EXPRs only appear where needed, that NEGATE_EXPR never wraps another NEGATE_EXPR and that "x+x" and "2*x" are represented the same way.

For example, left to its own devices a front-end would produce a potentially unbounded tree for "!!...x". Whilst it might be desirable to reproduce such ugliness in source code analysis tools, the fact that GCC currently only has to work with minimal trees (compact by construction) both improves compile-time and memory usage. Another example is that some front-ends (cough, g++, cough) love calling save_expr at any opportunity, even on constant integers and string literals. If it wasn't for the middle-end/fold only creating such tree nodes as necessary, many analyses/transformations would be significantly complicated.

As several front-end people have suggested, calling fold whilst constructing parse trees shouldn't be necessary (as shown by the shining examples of g77 and GNAT). In reality, many of the transformations performed by fold (most? as I expect expressions with constant operands are actually fairly rare at the source level) are purely to tidy up the inefficiencies or incorrect tree representations constructed by the front-ends. If it isn't clear from the name fold_buildN, these interfaces are designed to apply the invariants expected when building trees. In the further flung future, the buildN interfaces may become deprecated, and even further still trees might be marked read-only (i.e. const) outside of the middle-end (allowing static trees and more middle-end controlled tree sharing). Bwa-ha-ha-ha-ha (evil laugh)!

Roger --
Re: regression in ra ?
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=19922 -- Vercetti
Re: [RFC] fold Reorganization Plan
On Sat, 12 Feb 2005 06:33:42 -0700 (MST), Roger Sayle <[EMAIL PROTECTED]> wrote: > > On Sat, 12 Feb 2005, Nathan Sidwell wrote: > > I question if it is better to fold early. As I've said before, I think > > the optimizations that fold performs should be turned into a proper SSA > > optimization phase% -- that can be repeatedly applied as necessary. > > As for a proper tree-ssa optimization pass, I believe Andrew Pinski's > proposed tree-combiner pass which attempts to merge/combine consecutive > tree statements and check whether the result can be folded, would fit > your description. > > However, the utility of early fold to the GCC compiler is much greater > than simply compile-time evaluating expressions with constant operands. > One of the reasons that fold is so fast, is that it can rely on the fact > that all of a trees operands have already been folded. In fact, much > of the middle-end makes or can make use of the assumptions that constant > operands to binary operators appear second, that a NOP_EXPRs only appear > where needed, that NEGATE_EXPR never wraps another NEGATE_EXPR and that > "x+x" and "2*x" are represented the same way. And, for example, one problem I'm facing continuously is the lack of a canonical way of representing array accesses - both the C and C++ frontends emit different initial trees for that. And even the (maybe nonexistent) semantic differences of the middle-end ARRAY_REF vs. INDIRECT_REF/ADDR_EXPR are not clear to me. Richard.
Re: [RFC] fold Reorganization Plan
> I don't know whether these operations should be part of the same SSA
> optimization or not. #2 is more of a constant propagation kind of
> thing I guess. #1 is the kind of thing that has made const-fold so
> complicated. #1 is the important thing to add to the SSA optimizers,
> isn't it?

Yes. In fact, I'm working on making GVN-PRE do some reassociation as part of its value numbering in 4.1, so that we can detect that given

a = b + c
d = a + e

and

a = b + e
d = a + c

d has the same value in both cases. This is more or less done through value expression chasing, rather than SSA_NAME_DEF_STMT chasing (because we already know the "value expression" for b + e, which is something like ValueNumber34 + ValueNumber65).
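The value-numbering idea above can be sketched as follows; this is a hypothetical toy, not GVN-PRE itself. Commutative '+' chains are flattened and their operand value numbers sorted, so both orderings of the statements give d the same value number:

```python
# Toy value numbering: number each expression by the sorted value numbers
# of its flattened '+' chain, so a=b+c; d=a+e and a=b+e; d=a+c agree on d.
# Hypothetical sketch of the idea described above, not GCC's GVN-PRE.

def value_number(stmts):
    """stmts: dict of var -> (op0, op1), each statement var = op0 + op1."""
    vn = {}          # name -> value number
    table = {}       # canonical value expression -> value number
    counter = [0]

    def leaf_vn(name):                 # names live on entry get fresh VNs
        if name not in vn:
            vn[name] = counter[0]
            counter[0] += 1
        return vn[name]

    def flatten(name):                 # collect leaf VNs of a '+' chain
        if name not in stmts:
            return [leaf_vn(name)]
        op0, op1 = stmts[name]
        return flatten(op0) + flatten(op1)

    for var in stmts:
        key = ("plus",) + tuple(sorted(flatten(var)))
        vn[var] = table.setdefault(key, 1000 + len(table))
    return vn

vn = value_number({"a1": ("b", "c"), "d1": ("a1", "e"),
                   "a2": ("b", "e"), "d2": ("a2", "c")})
print(vn["d1"] == vn["d2"])   # -> True: both orderings value-number alike
```

Sorting the flattened operand list is what stands in for "value expression chasing" here: the canonical key is built from value numbers, not from any particular association of the source expressions.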
Re: [RFC] fold Reorganization Plan
Hi Nathan,

> hm, we may be in violent agreement :) It depends what you mean by 'early
> fold'. You say it would handle constant expressions for FEs -- isn't
> that the same as what I described as a constant expression evaluator?

Yes.

> After all, if it is just for constant exprs, it is required to 'fold' to
> a _CST node -- or give an error. If I've misunderstood, could you
> clarify?

Can a compile-time constant appearing in an initializer be as wild as the following?

0 ? (foo () + 9) : (3 + 5)

Here (foo () + 9) does not fold to a constant, but the whole expression does fold to 8.

> > The reasons to optimize the full fledged version to handle GIMPLE
> > statements include
> >
> > 1) We can remove parts of fold handling those cases that never occur
> > in the GIMPLE world.
>
> Do you have examples of what this would be?

Certainly, a GIMPLE-specific fold wouldn't need to handle TRUTH_ANDIF_EXPR. I cannot come up with a better example right now. TRUTH_ANDIF_EXPR wouldn't be a good example because we are just removing one case of the big switch statement in fold, which is unlikely to give any measurable speed-up.

> I have no feeling for what might be foldable in GIMPLE. I'm curious
> (I don't think it'll affect the discussion though).

x & x -> x ? :-)

> hm, there's really two kinds of thing that need to happen,
> 1) the kind of reassociation & strength reduction that you describe
> 2) folding of constant expressions, for when we discover that some of
> A, B and C are constants.
>
> I don't know whether these operations should be part of the same SSA
> optimization or not. #2 is more of a constant propagation kind of
> thing I guess. #1 is the kind of thing that has made const-fold so
> complicated. #1 is the important thing to add to the SSA optimizers,
> isn't it?

Yes. Some transformations that happen in fold would involve CFG manipulation in the GIMPLE world. Those transformations include TRUTH_{AND,OR}IF_EXPR, a lot of COND_EXPR manipulations, etc.
If these are good transformations, we need to move them to the SSA optimizers as you mentioned. Kazu Hirata
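Kazu's initializer example can be handled by a lazy evaluator that only descends into the branch selected by a constant condition, so the non-constant foo () branch never matters. A toy sketch with a tuple-based expression representation (hypothetical, not GCC trees):

```python
# Lazy folding of 0 ? (foo() + 9) : (3 + 5): the selected branch folds to
# 8 even though the other branch is not constant. Toy representation only.

def const_fold(expr):
    """Return the constant value of expr, or None if it is not constant."""
    if isinstance(expr, int):
        return expr
    op = expr[0]
    if op == "cond":                       # ("cond", pred, then, else)
        pred = const_fold(expr[1])
        if pred is None:
            return None                    # condition itself not constant
        return const_fold(expr[2] if pred else expr[3])
    if op == "plus":
        a, b = const_fold(expr[1]), const_fold(expr[2])
        return a + b if a is not None and b is not None else None
    if op == "call":                       # never a compile-time constant
        return None
    return None

e = ("cond", 0, ("plus", ("call", "foo"), 9), ("plus", 3, 5))
print(const_fold(e))   # -> 8
```

A front end with stricter constant-expression rules (Nathan's C/C++ point) could instead return an error as soon as it descends into a banned construct on the selected path.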
Re: [RFC] fold Reorganization Plan
Kazu,

> Can a compile-time constant appearing in an initializer be as wild as
> the following?
>
> 0 ? (foo () + 9) : (3 + 5)
>
> Here (foo () + 9) does not fold to a constant, but the whole expression
> does fold to 8.

Well, it depends on the FE's language definition :) For C and C++ the above is not a constant-expression as the language defines it. I can see a couple of obvious ways to deal with this with an FE specific constant expression evaluator,

1) during parsing set a flag if the expression contains something not permitted for a constant-expression

2) a lazy folder returns 'error' when it meets something not allowed (and if ?: is allowed, it must go down each of its branches to determine if they have something banned).

Anyhow, a backend strength-reducer would not be so constrained. (hey, can we start using the terms 'strength reducer' and 'constant evaluator' to distinguish the two uses that are currently tangled up inside fold.) Does that answer your question? Thanks for your explanation of gimple folding etc, it was most informative.

nathan -- Nathan Sidwell:: http://www.codesourcery.com :: CodeSourcery LLC [EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk
Re: GCC 4.0 Status Report (2005-02-03)
> * Project Title I. SMS (Modulo Scheduling) Improvements. > > * Project Contributors Mostafa Hagog > > * Dependencies No dependencies. > > * Delivery Date Ready, currently committed to the autovect-branch. > > * Description > > Describe the project *in detail*. > > What will you be doing? 1. Make the tree loop-versioning usable in RTL cfg-layout mode by making it a cfg hook. 2. Move SMS to use cfg-layout mode. 3. Use loop information to detect simple loops. 4. Replace the loop versioning in SMS by using the RTL loop-versioning. 5. Several other improvements: 1. Check if the SMSed loop kernel is more efficient in terms of number of cycles; if not, undo the changes. We do this by feeding the loop kernel into the DFA and counting the number of cycles before and after SMS - if we didn't improve (there is a chance, because of the register copies we add) we prefer the original loop. a. Ignore register anti-dependences - use register copies instead. b. Add backtracking to the scheduling algorithm; when failing to find a cycle within a kernel of II cycles for a given node, we used to restart the whole process with a kernel of II + 1. Now we try to un-schedule some of the nodes that the one we failed on depends on and schedule the failing node first, then try the other nodes. > > What parts of the compiler will be modified? modulo-sched.c, ddg.c > > What will the benefits of your change be? (Be specific -- for > example, if you will be improving code quality, indicate what kind > of code, and, if possible, how great the improvement will be.) 1. More loops will be applicable to SMS. 2. SMS is undone when it is not profitable, so we don't increase code size in cases we don't gain much. 3. Better schedules - due to backtracking. > > What risks do you foresee? (If you say "none", you'll be asked to > resubmit...) What will you be doing to mitigate those risks? The only risk is to affect the tree level loop versioning and the modulo-scheduling.
> > I'll synthesize this information and make decisions about schedules > for 4.1 once I have the proposals in hand. > > To that end, please submit your proposal(s) by February 17th. Late > proposals will still be considered. In fact, there's nothing to > prevent us from including work conceived or developed later in the cycle, > if that's appropriate. So, the February 17th date is not a drop-dead > deadline for 4.1 material in any way. > > But, I would like to get a sense of what projects are already out > there, so that we can start bringing them into GCC 4.1. By staging > the integration, we can take some time to stabilize after each major > contribution. > > I will create the GCC 4.0 branch on February 24th, after posting > information about initial GCC 4.1 development, based on the proposals > I have received by the 17th. Based on current progress, my > expectation is that the branch will be in very good shape by then. > > Therefore, my current expectation for a GCC 4.0 release date is April 15th. > > -- > Mark Mitchell > CodeSourcery, LLC > [EMAIL PROTECTED]
Re: GCC 4.0 Status Report (2005-02-03)
Please discard the previous message; it was sent by mistake.
Re: Using up double diskspace for working copies (Was Re: Details for svn test repository)
> Right - using svn programs to directly modify the svk depot (which is its > local 'repository'), is touchy. You *can* do it, but you have to be quite > careful about the svk:* properties used to track merges and mirrors. > Generally there's no need, other than perhaps using a read-only client to > make your local work visible to others prior to pushing it upstream. > > However, none of this means you can't use svk as your local client > with a normal svn repository mirrored into the depot. In fact, it's > probably the most common use of svk. This is *not* a readonly setup, as svk > knows how to push commits through and back to upstream. Obviously this can > entail conflict resolution if your local mirror has become a local branch, > since svk also allows you to make commits that you haven't pushed back up > yet, but that's not really different than still having it all loose WC > without a local VCS. Oh, okay, I get it now. :) Thanks for all the clarification
Re: [RFC] fold Reorganization Plan
Roger,

> However, the utility of early fold to the GCC compiler is much greater
> than simply compile-time evaluating expressions with constant operands.
> One of the reasons that fold is so fast is that it can rely on the fact
> that all of a tree's operands have already been folded. In fact, much of
> the middle-end makes or can make use of the assumptions that constant
> operands to binary operators appear second, that NOP_EXPRs only appear
> where needed, that NEGATE_EXPR never wraps another NEGATE_EXPR and that
> "x+x" and "2*x" are represented the same way.

I did not realize fold was non-recursive -- I thought it was, but maybe I was hallucinating when I saw that :) Canonicalizing for the middle end could, of course, happen during the gimplification stage.

> For example, left to its own devices a front-end would produce a
> potentially unbounded tree for "!!...x". Whilst it might be desirable to
> reproduce such ugliness in source code analysis tools, the fact that GCC
> currently only has to work with minimal trees (compact by construction)
> both improves compile-time and memory usage.

The current implementation does not have those properties -- because each one of those !'s will have created a ! node, which is then folded. You're correct that a fold_buildN interface would save memory by not creating the to-be-folded node in the first place. However, we've identified that early folding is getting in the way of language conformance (Andrew Haley's java comments, there are some icky cases with C, C++, floats and casts). It would be a pity not to come up with a design that would allow those to be done correctly. I wonder how much such non-reassociation/strength reduction folding happens in a real program? Do we have some numbers on that?

> Another example is that some front-ends (cough, g++, cough) love calling
> save_expr at any opportunity, even on constant integers and string
> literals. If it wasn't for the middle-end/fold only creating such tree
> nodes as necessary, many analyses/transformations would be significantly
> complicated.

I don't understand your point here. save_expr only creates SAVE_EXPRs on TREE_SIDE_EFFECTed nodes, doesn't it? Part of the problem with them is the FE's love of lowering some FE construct to middle-end representation early on, and of necessity referring to some operand more than once. An example would be a c++ convert-to-base, which needs to generate something like 'op ? op+base_offset : NULL' -- if the middle end used an FE specific BASE_CONV node, the creation of a SAVE_EXPR could be delayed until gimplification -- when constant evaluation/strength reduction can be applied concurrently.

> As several front-end people have suggested, calling fold whilst
> constructing parse trees shouldn't be necessary (as shown by the shining
> examples of g77 and GNAT). In reality, many of the transformations
> performed by fold (most? as I expect expressions with constant operands
> are actually fairly rare at the source level) are purely to tidy up the
> inefficiencies or incorrect tree representations constructed by the
> front-ends.

Er, should we really have a 'cleanup incorrect gunk' mechanism (much like how reload fixes up all that dodgy RTL backends produce).

> If it isn't clear from the name fold_buildN, these interfaces are
> designed to apply the invariants expected when building trees.

No it wasn't clear. I thought these would be doing more than making canonical trees. I thought they would be doing the kind of reassociation and strength reduction optimization that fold-const currently does. Are they to do that as well? If they are just to make canonical trees, can we have a less confusing name?

> In the further flung future, the buildN interfaces may become deprecated
> and even further still trees might be marked read-only (i.e. const)
> outside of the middle-end (allowing static trees and more middle-end
> controlled tree sharing). Bwa-ha-ha-ha-ha (evil laugh)!

evil grin indeedy :)

nathan -- Nathan Sidwell:: http://www.codesourcery.com :: CodeSourcery LLC [EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk
Re: [RFC] fold Reorganization Plan
On Feb 12, 2005, at 12:57, Nathan Sidwell wrote:

> Well, it depends on the FE's language definition :) For C and C++ the
> above is not a constant-expression as the language defines it. I can see
> a couple of obvious ways to deal with this with an FE specific constant
> expression evaluator, 1) during parsing set a flag if the expression
> contains something not permitted for a constant-expression 2) a lazy
> folder returns 'error' when it meets something not allowed (and if ?: is
> allowed, it must go down each of its branches to determine if they have
> something banned).

Front ends should be responsible for doing any constant folding that their language definition requires. Otherwise, you'd get the strange situation that the legality of a program depends on the strength of the optimizers, compilation flags used or even target properties. Your proposal to have the tree folders check whether the program obeys C/C++ language semantics seems fundamentally flawed. GCC's middle and back end should not be required to do anything for a function that it has determined will never be called. Whether an expression is constant for the middle end may depend on many factors, including whether a certain function call could be expanded inline or not.

Constant folding as required by language standards has a very precise definition, and does not depend on compilation options or optimization parameters. When the FE hands off a tree to the middle end, it asserts that the program conforms to the static semantics of the programming language. This gives the optimizers the freedom to do any transformations, as long as they conform to the language-independent definition of GIMPLE.

-Geert
Re: [RFC] fold Reorganization Plan
Geert Bosch wrote:

> Front ends should be responsible for doing any constant folding that
> their language definition requires. Otherwise, you'd get the strange
> situation that the legality of a program depends on the strength of the
> optimizers, compilation flags used or even target properties.

I entirely agree. Unfortunately what we have now is not that -- fold is doing both optimization and (some) C & C++ semantic stuff.

> Your proposal to have the tree folders check whether the program obeys
> C/C++ language semantics seems fundamentally flawed.

That is not my proposal. I'm sorry if I gave the impression it was, but it isn't. (What I meant by 'the tree folders' in that regard was an FE-specific folder.)

> Constant folding as required by language standards has a very precise
> definition, and does not depend on compilation options or optimization
> parameters. When the FE hands off a tree to the middle end, it asserts
> that the program conforms to the static semantics of the programming
> language. This gives the optimizers the freedom to do any
> transformations, as long as they conform to the language-independent
> definition of GIMPLE.

Yup, you're reiterating my position :)

nathan -- Nathan Sidwell:: http://www.codesourcery.com :: CodeSourcery LLC [EMAIL PROTECTED]:: http://www.planetfall.pwp.blueyonder.co.uk
Re: [RFC] fold Reorganization Plan
On Feb 12, 2005, at 14:57, Nathan Sidwell wrote:

> I entirely agree. Unfortunately what we have now is not that -- fold is
> doing both optimization and (some) C & C++ semantic stuff.
>
>> Your proposal to have the tree folders check whether the program obeys
>> C/C++ language semantics seems fundamentally flawed.

OK, we're in violent agreement then! :) I misunderstood your message.

> That is not my proposal. I'm sorry if I gave the impression it was, but
> it isn't. (What I meant by 'the tree folders' in that regard was an
> FE-specific folder.)

OK, then things make a lot more sense!
Re: Moving to an alternate VCS
Joe Buck <[EMAIL PROTECTED]> writes: [...] | It might make the most sense to go the auto-generation route, and then ChangeLogs entries, when properly done (by people like RTH or Roger Sayle), carry highly valuable information about what the purpose of a change-set is; not just the code. I'm of the opinion that that source of information should be preserved, when and if we switch to another VCS. -- Gaby
Re: [RFC] fold Reorganization Plan
> As several front-end people have suggested, calling fold whilst
> constructing parse trees shouldn't be necessary (as shown by the shining
> examples of g77 and GNAT).

I don't follow. GNAT certainly calls fold for every expression it makes.

> In reality, many of the transformations performed by fold (most? as I
> expect expressions with constant operands are actually fairly rare at
> the source level) are purely to tidy up the inefficiencies or incorrect
> tree representations constructed by the front-ends.

I disagree. It's far better to have common code to do simplifications than to have each front end have its own set. I'm not sure I understand your point here.
Re: [RFC] fold Reorganization Plan
On Feb 12, 2005, at 15:58, Richard Kenner wrote:

>> As several front-end people have suggested, calling fold whilst
>> constructing parse trees shouldn't be necessary (as shown by the
>> shining examples of g77 and GNAT).
>
> I don't follow. GNAT certainly calls fold for every expression it makes.

But GNAT doesn't rely on GCC for constant folding static expressions, or even call the back end before semantic analysis has finished! This is what we're talking about. -Geert
RE: Global Reload Problem
Thanks for your help. I will look at some of the changes you suggested. Gyle -Original Message- From: James E Wilson [mailto:[EMAIL PROTECTED] Sent: Wednesday, February 09, 2005 2:00 PM To: Gyle Yearsley Cc: gcc@gcc.gnu.org Subject: RE: Global Reload Problem On Thu, 2005-02-03 at 10:22, Gyle Yearsley wrote: > I believe the seg fault is happening in the second TEST_HARD_REG_BIT since > the regno(0)+n(-1) for regno = 0 is -1. HARD_REGNO_MODE_OK is a C > function and not a macro. "i << -1" is undefined, and for some hosts, may result in a seg fault. Similarly, if HARD_REG_SET is an array, then indexing into it with an offset of -1 gives an invalid out-of-bounds access, which can fail in various ways. There is a known problem with TEST_HARD_REG_BIT in that we can get failures (out-of-bounds access, undefined shift) if regno+n is larger than the number of hard registers. I don't recall seeing a similar bug reported when regno+n < 0. See PR 12754. I'd suggest filing a bug report for this problem, and making it depend on PR 12754. Fixing this is likely to be complicated, as these macros are used in lots of different places in a number of different ways. You might be able to work around the problem by saying that register 0 can't hold an HImode value, to try to avoid creating the HImode subreg in the first place. It isn't obvious whether that helps though. You might need to reorder tests to put HARD_REGNO_MODE_OK checks before TEST_HARD_REG_BIT checks. For an additional laugh, check out the for loop in find_valid_class for (regno = 0; regno < FIRST_PSEUDO_REGISTER - n && ! bad; regno++) and consider what happens when n is negative. Oops. I wonder how many places there are that do this... -- Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com
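The hazard Jim describes can be made concrete with a small sketch. This is a hypothetical model, not GCC's actual HARD_REG_SET code: a bit-test over a word array misbehaves (out-of-bounds index, undefined shift) when the register number is negative or too large, and a range check up front -- in the spirit of reordering the HARD_REGNO_MODE_OK and TEST_HARD_REG_BIT checks -- avoids it.

```c
#include <limits.h>

/* Hypothetical, simplified model of a hard-register bit set.  */
#define FIRST_PSEUDO_REGISTER 16
#define BITS_PER_WORD (sizeof (unsigned long) * CHAR_BIT)
#define N_WORDS \
  ((FIRST_PSEUDO_REGISTER + BITS_PER_WORD - 1) / BITS_PER_WORD)

typedef unsigned long hard_reg_set[N_WORDS];

/* Return nonzero iff bit REGNO is set.  Out-of-range register
   numbers (e.g. regno + n == -1, as in the report above) are
   treated as "not set" instead of producing an out-of-bounds
   array access or an undefined shift.  */
static int
test_hard_reg_bit_checked (const hard_reg_set set, int regno)
{
  if (regno < 0 || regno >= FIRST_PSEUDO_REGISTER)
    return 0;
  return (set[regno / BITS_PER_WORD] >> (regno % BITS_PER_WORD)) & 1;
}
```

An unchecked version of the same expression, called with regno == -1, would index set[-1] and shift by a negative amount -- both undefined behavior, which is exactly why the segfault's symptoms vary between hosts.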
Re: Moving to an alternate VCS
On Sat, Feb 12, 2005 at 09:30:38PM +0100, Gabriel Dos Reis wrote: > Joe Buck <[EMAIL PROTECTED]> writes: > > [...] > > | It might make the most sense to go the auto-generation route, and then > > ChangeLog entries, when properly done (by people like RTH or Roger > Sayle), carry highly valuable information about what the purpose of a > change-set is; not just the code. I'm of the opinion that that source > of information should be preserved, when and if we switch to another > VCS. I agree; the point is that the ChangeLog entry and the SVN checkin comment that describes the revision can be one and the same (which would mean that all information that normally appears in a ChangeLog is present in the svn checkin comment, in the standard format).
Re: adding new instruction
"aram bharathi" <[EMAIL PROTECTED]> writes: > I would like to add a new instruction based on the Thumb ISA. I have added the > instruction in both as and gcc. Both of them are working correctly, but when > I call ld it shows an error like > > /home/.../arm-elf-ld : /home/../arm-elf/lib/libc.a(printf.o)(printf): warning > : interworking not enabled > first occurrence : /tmp/cc00zhyh.o : thumb call to arm > /tmp//cc00zhyh.o(.text+0x4e>: In function 'main' > new.c:internal error: dangerous error > > Do I have to change anything in ld? I have searched for the ld > source file but I couldn't find one in the ld folder. Which file has to be > modified first, and what kind of changes are needed? The source code for that error is in the bfd directory. In general, if you want to link ARM and Thumb code together, you should compile all your code with the -mthumb-interwork option. See the documentation. Ian