Re: Threading the compiler

2006-11-10 Thread Kevin Handy

Mike Stump wrote:

...


Thoughts?



Raw thoughts:

1. Threading isn't going to help for I/O bound portions.

2. The OS should already be doing some of this work for us.
 Several 'parts' of a compile already run as separate processes and
 can use separate CPUs: 'make', the driver (gcc) command, the language
 compiler proper, the assembler, the linker, etc.

3. The OS will likely be using some of the CPUs for its own purposes:
 I/O prefetch, display drivers, sound, etc. (and these uses will
 probably grow over time as OS vendors get used to having the extra
 cores available). Different machines will also have differing numbers
 of CPUs: old systems will still have only one or two cores, while
 newer multi-core machines may have many more. What will a multi-core
 compiler design do on the old processors (extreme slowness?)

4. Will you "serialize" error messages so that two compiles of a file
 will always display the errors in the same order? And will the object
 files created be identical from one compile to the next?

5. Will more "heavy" optimizations become practical? I.e., will the
 extra cores speed things up enough that really expensive (in compile
 time) optimizations become reasonable?



Re: Threading the compiler

2006-11-13 Thread Kevin Handy

Paul Brook wrote:

For other optimisations I'm not convinced there's an easy win compared with 
make -j. You have to make sure those passes don't have any global state, and 
as other people have pointed out garbage collection gets messy.  The compile 
server project did something similar, and that seems to have died.
 


On the -j option:

What kind of improvement could you get by bypassing the GCC/G++/etc.
driver and replacing it with a set of rules in the makefile? The rules
would break the compilation down into its individual steps, which could
then be scheduled in parallel, instead of the gcc driver running all
the parts sequentially. This would let -j schedule things at a finer
grain without touching the compiler itself.

It might be easier to try as a makefile-to-makefile transformation, or
as a modified version of automake, using something like 'gcc -###' to
split up the compile steps; a rough sketch follows.
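
A minimal hand-written sketch of the idea (the rule layout and the
.i/.s intermediate files below are assumptions for illustration, not
something gcc or automake generates for you):

# Hypothetical makefile fragment: expose the preprocess, compile, and
# assemble steps as separate rules, so 'make -j' can schedule each one.
# (Recipe lines must be indented with a tab.)
%.i: %.c
	$(CC) -E $(CPPFLAGS) -o $@ $<    # preprocess only
%.s: %.i
	$(CC) -S $(CFLAGS) -o $@ $<      # compile preprocessed source to assembly
%.o: %.s
	$(CC) -c -o $@ $<                # assemble
prog: main.o util.o
	$(CC) -o $@ $^                   # final link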

Would a transform like this be able to make use of more "free"
CPU cycles? Are there currently enough "free" cycles available for
this to have any noticeable effect?



Re: We're out of tree codes; now what?

2007-03-20 Thread Kevin Handy

Jakub Jelinek wrote:

On Tue, Mar 20, 2007 at 09:37:38AM -0400, Doug Gregor wrote:
  

Even if we only use subcodes for the less often used codes, I think we
still take the performance hit. The problem is that it's very messy to



I'm sure it's a smaller hit than going to a 9-bit tree code, and on
i386/x86_64 maybe even smaller than a 16-bit tree code (if I remember
well, 8-bit and 32-bit accesses are usually faster than 16-bit ones).

  

deal with a two-level code structure inside, e.g., the C++ front end.
I did a little experiment with a very rarely used tree code
(TYPEOF_TYPE), and the results weren't promising:

 http://gcc.gnu.org/ml/gcc/2007-03/msg00493.html



If you use what has been suggested, i.e.:
#define LANG_TREE_CODE(NODE) \
  (TREE_CODE (NODE) == LANG_CODE \
   ? ((tree_with_subcode *)(NODE))->subcode : TREE_CODE (NODE))
  

This subcode idea feels like a bug attractor to me.

For example: #defines have enough problems with side effects already,
and this one references NODE more than once, so what happens when NODE
is a function call of some kind, or an expression with other side
effects? (Someone is going to try to be clever.)
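
A small self-contained illustration of that hazard (MY_CODE and
next_node() are made up for the example; they are not GCC code, just a
macro with the same shape as the one above):

#include <stdio.h>

/* The macro mentions its argument more than once, so any side effect
   in the argument can happen more than once. */
#define MY_CODE(NODE) \
  (((NODE)->code == 255) ? (NODE)->subcode : (NODE)->code)

struct node { int code; int subcode; };

static struct node nodes[2] = { { 255, 7 }, { 3, 0 } };
static int pos = 0;

/* Someone being "clever": fetch the next node, advancing a cursor. */
static struct node *next_node (void) { return &nodes[pos++]; }

int main (void)
{
  /* Intended: look at one node.  Actual: next_node() runs twice, so
     the test reads nodes[0] but the result comes from nodes[1]. */
  printf ("code = %d\n", MY_CODE (next_node ()));
  printf ("cursor advanced to %d, not 1\n", pos);
  return 0;
}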

then it shouldn't be so big.  Everywhere where you don't need >= 256
codes you'd just keep using TREE_CODE, only if e.g. some
  

Wouldn't this require extra effort to know when you should use one
method of retrieving the code versus the other? It seems like a lot of
remembering would be necessary, which would be a good source of bugs.

And having two different methods of getting the "same thing" would make
searching the source code for patterns that much harder.

switch contains >= 256 FE specific subcodes you'd use LANG_TREE_CODE
instead of TREE_CODE.  GCC would warn you if you forget to use
LANG_TREE_CODE even when it is needed, at least in switches, you'd get
warning: case label value exceeds maximum value for type
  

I'd expect that TREE_CODE would be referenced more often in plain
comparisons than in switch statements, and those comparisons probably
wouldn't generate the warning.
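
As a rough stand-in for that situation (an 8-bit code field and a
made-up over-large subcode value, not GCC's actual types or codes):

#include <stdio.h>

#define BIG_SUBCODE 300   /* does not fit in the 8-bit code field */

int classify (unsigned char code)
{
  switch (code)
    {
    case BIG_SUBCODE:   /* gcc: "case label value exceeds maximum value for type" */
      return 1;
    default:
      break;
    }

  if (code == BIG_SUBCODE)   /* always false, but usually silent by default */
    return 1;

  return 0;
}

int main (void)
{
  printf ("%d\n", classify (42));
  return 0;
}

Only the switch gets the diagnostic quoted above; the equality test
typically needs something like -Wextra before gcc complains.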



Re: Compiling GCC with g++: a report

2005-05-24 Thread Kevin Handy

Gabriel Dos Reis wrote:


Gabriel Dos Reis <[EMAIL PROTECTED]> writes:

[...]

| Attempt to get the GNU C++ compiler through the same massage is
| underway (but I'm going to bed shortly ;-)).

I can also report that I got the GNU C++ compiler through -- and apart
from uses of C++ keywords (template, namespace, class), it worked
out.  A note on a type-safety issue though: lookup_name() is declared in
c-tree.h as

 extern tree lookup_name (tree);

and used in c-common.c:handle_cleanup_attribute() according to that
signature.  It is however declared and defined in cp/ as

 extern tree lookup_name (tree, int);

That was caught at link time (and dealt with).

-- Gaby
 


Would it be possible to add a diagnostic to GCC to warn when C++
keywords are being used as identifiers in C code? Maybe also cover the
Objective-C keywords.

This seems like it would be useful to someone writing library functions
that might later be imported (cut and pasted) into code in the other
languages, as well as for code being converted from C to C++.
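
For example, all of the following is perfectly legal C, but every
marked identifier is a keyword in C++, so the file would need renaming
before it could be pasted into a C++ project (the names are made up for
illustration):

/* Legal C; none of these identifiers is reserved in C. */
struct class                 /* 'class' is a C++ keyword */
{
  int new;                   /* 'new' */
  int delete;                /* 'delete' */
};

int template (int this)      /* 'template', 'this' */
{
  return this + 1;
}

int main (void)
{
  struct class namespace = { 1, 2 };   /* 'namespace' */
  return template (namespace.new);
}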



Re: Compiling GCC with g++: a report

2005-05-24 Thread Kevin Handy

Diego Novillo wrote:


On Mon, May 23, 2005 at 01:15:17AM -0500, Gabriel Dos Reis wrote:

 


So, if various components maintainers (e.g. C and C++, middle-end,
ports, etc.)  are willing to help quickly reviewing patches we can
have this done for this week (assuming mainline is unslushed soon).
And, of course, everybody can help :-)

   


If the final goal is to allow GCC components to be implemented in
C++, then I am all in favour of this project.  I'm pretty sick of
all this monkeying around we do with macros to make up for the
lack of abstraction.


Diego.

 


It might be interesting, sometime in the future, to fork a version
of GCC into a C++ version, just to see what can be done with it.

It might make it easier for someone to write their own front/back
end by using existing classes to fill out most of the standard stuff,
and to build up trees using classes, etc.



Re: Sine and Cosine Accuracy

2005-05-26 Thread Kevin Handy

Paul Koning wrote:


"Scott" == Scott Robert Ladd <[EMAIL PROTECTED]> writes:
   



Scott> Richard Henderson wrote:
>> On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
>> 
>>> static const double range = PI; // * 2.0; static const double

>>> incr = PI / 100.0;
>> 
>> 
>> The trig insns fail with large numbers; an argument reduction loop

>> is required with their use.

Scott> Yes, but within the defined mathematical ranges for sine and
Scott> cosine -- [0, 2 * PI) -- the processor intrinsics are quite
Scott> accurate.

Huh?  Sine and cosine are mathematically defined for all finite
inputs.


Yes, normally the first step is to reduce the arguments to a small
range around zero and then do the series expansion after that, because
the series expansion converges fastest near zero.  But sin(100) is
certainly a valid call, even if not a common one.

  paul


 


But you are using a number in the range of 2^90, and you only have
64 bits for the whole floating-point representation, some of which are
needed for the exponent. Storing 2^90 exactly would require 91 bits for
the integer part alone, plus a couple more for the '*PI' factor, and
that doesn't include anything past the decimal point. With a 53-bit
significand, values stored near PI * 2^90 are only known to within
about 2^39, so you are more than 30 bits short of getting even a crappy
result.

sin/cos/... is essentially based on the value of n modulo 2*PI.
To distinguish even 360 unique positions around the circle, you need
at least the 9 low-order bits of the number near the units place. You
don't have them. That portion of the number has fallen off the end of
the representation, and is forever lost. All you are calculating is
noise.

To see this, try printing 'cos(n) - cos(n+1.0)'. If you get something
close to '0', you are outside of the function's useful range, or just
unlucky enough to be on opposite sides of a hump (near n = k*PI - 1/2,
and friends).

Or easier, try '(n + 1.0) - n'. If you don't get something
close to 1.0, you've lost.

$ vi check.c
#include <stdio.h>
#include <math.h>

#define PI 3.1415926535 /* Accurate enough for this test */

int main()
{
    double n = PI * pow(2.0, 90.0);

    /* Both differences vanish: the spacing between adjacent doubles
       near n is vastly larger than 1.0 (or than 2*PI). */
    printf("Test Add %f\n", (n + 1) - n);
    printf("Test cos %f\n", cos(n) - cos(n + 1));

    return 0;
}

$ gcc check.c  -lm
$ ./a.out
Test Add 0.00
Test cos -0.00



Re: Some notes on the Wiki

2005-07-11 Thread Kevin Handy

Paul Koning wrote:


"Joseph" == Joseph S Myers <[EMAIL PROTECTED]> writes:
   



Joseph> On Mon, 11 Jul 2005, Michael Cieslinski wrote:
>> I also could convert parts of the gcc internals manual into wiki
>> pages.  But only if there is a consensus about this being the way
>> to go.

Joseph> I'm sure it's the wrong way to go.  I find a properly
Joseph> formatted and indexed book far more convenient for learning
Joseph> about substantial areas of compiler internals, or for finding
Joseph> what some particular macro is specified to do, than a wiki.

I'll second that.  Unlike some other major GNU projects, GCC's
internals manual is substantial and very good.  Yes, it needs ongoing
improvement, but I'd prefer that rather than flipping to Twiki.

 


To see how good the internals documentation really is, try to build a
very simple front end using ONLY the documentation. Make it on the
order of a hardwired "int main() { return 0; }". Or better yet, find an
outsider who knows C, but not GCC internals, to write it.

No outside source can be used (i.e. no source code that is not included
in the documentation).

It cannot be done. Not even close. Not even if you allow tree.def.

Too much necessary information exists only outside the documentation.



Re: Problems on Fedora Core 4

2005-07-20 Thread Kevin Handy

Michael Gatford wrote:

We compile the following code with gcc (historically 2.95.3, 
egcs-2.91.66 or VC5/6 on Windows).



 std::map quickfindtag;



Shouldn't 'string' be 'std::string' also?

I have just installed Fedora Core 4 and am trying to compile it with 
gcc 4.0.0 (Redhat 4.0.0-8). However I get the error message:


FmDisplay.h: In member function 'mapTags::doCommand(X*, const char*)'
FmDisplay.h:61: error: expected `;' before 'functionIterator'
FmDisplay.h:62: error: 'functionIterator' was not declared in this scope

I do not know why this is happening. I installed gcc 2.95.3 and got 
exactly the same error message so it is possibly to do with the Fedora 
installation rather than gcc.


The same code compiles ok on Solaris 5.6/7 with egcs-2.91.66 and on 
Redhat 8.0 with gcc 2.95.3.


Thanks in advance.

Mike





Re: A couple more subversion notes

2005-10-20 Thread Kevin Handy

Ian Lance Taylor wrote:


Richard Guenther <[EMAIL PROTECTED]> writes:

 


If it is at all possible we should probably try to keep read-only CVS working
(and up-to-date) for HEAD and release-branches.  This will allow occasional
contributors and technically-less-provided people to continue working in
submit-patch mode or in regular testing without raising the barrier for them.

I guess it should be possible with some commit-trigger scripts which svn
surely has?
   



I think that's a good idea.  But I don't think it's fair to expect
Daniel to write it.  It should be feasible for any sufficiently
interested person to write a script to dump out a patch from SVN,
queue up the patches, and apply them to the CVS repository.  In fact
this doesn't even have to be driven from SVN commit scripts.  It
obviously doesn't have to be a real-time operation, and could be done
at any time.  For example a cron job could simply grab a diff of
everything since the last time it ran and then apply it to the CVS
repository.  The only even slightly tricky part would be getting the
cvs add and rm commands right.  We could run that script once an hour.
Anybody who needs more cutting edge sources can switch to SVN.
 


Would it be possible to write a read-only cvs interface to the svn
database, i.e. replace the cvs server with an svn-to-cvs emulation
layer?

It would mean that the fake "cvs" server would always be up-to-date,
but it probably would not be able to do everything a real cvs server
does. Would this limited cvs access be enough for most users?



ixp425 configuration

2009-08-24 Thread Kevin Handy
Is it possible to configure a working gcc cross compiler for the Intel
ixp425 processor (Linux system OS build, trying to update to use eabi)?
This is an arm xscale processor without floating point. Every --target
I've tried fails with build errors, including missing header files,
"configuration not supported", and assorted configuration errors. I
believe that the target should be 'armv4t-linux-eabi', or something
close, but I cannot get past the build errors on the various versions
of gcc I've tried. I am doing the build as properly as I understand it:
separate build directory, ../gcc-4.4.1/configure..., make.


I have also tried 'xscale-elf' on 4.4.1, but that configuration is not
accepted, although it is on 4.2.3.

I also limit the build to '--language=c' to simplify the build. Am I
configuring it wrong, or does gcc not support the arm xscale chips any
more?



armv4t

2009-09-21 Thread Kevin Handy

What version of GCC will build for a cross --target=armv4t-linux-eabi,
which I believe is the right code for an ixp425 processor? The host
compiler is gcc-4.3.3 on a Linux-debian-test system. I have also
unsuccessfully tried the armv5t target, with similar results.

I have tried numerous versions, and get nothing but shell errors
(such as "cannot compute suffix of object files"), and numerous syntax
errors with the gcc source. I have built --target=xscale-elf in the past
without any problems, but that target apparently no longer exists.

In which version of gcc was xscale-elf dropped?

I am trying to get a version of the gcc compiler that will compile for
an ixp425 processor with the newer eabi, but I cannot find any version
that will accept the target options and successfully build. Nor can I
get it to build with any --target I could think of, eabi or otherwise.

Has support for the arm systems been dropped in gcc? And if so, in
which version?

I need to find out where to look for a functional version of the gcc
cross compiler for this cpu.



Re: Reconsidering gcjx

2006-01-31 Thread Kevin Handy

Andrew Haley wrote:


Tom Tromey writes:
> > "Thorsten" == Thorsten Glaser <[EMAIL PROTECTED]> writes:
> 
> >> ecj is written in java.  This will complicate the bootstrap process.
> 
> Thorsten> Why not keep enough support in jc1 to bootstrap ecj?
> 
> We don't know how much of the language that would be.


And we can't tell _a priori_.  As I understand it, the intention is to
use upstream sources, and they will change.

 

Don't you just need to have a functional JVM, and .class (.jar) files
for ecj and all its libraries? That would change the question to what
language the JVM is written in.


Re: incompatible implicit declaration of built-in function 'exit'

2006-03-28 Thread Kevin Handy

Jack Howarth wrote:


  I am trying to compile some fairly old legacy c code with gcc 4.1
in FC5 and have been able to eliminate all the compiler warnings save
one...

warning: incompatible implicit declaration of built-in function 'exit'

 


It should be declared in <stdlib.h>.
Is there a missing #include for that?
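
For instance, a minimal sketch of that fix (the file below is made up
for illustration; the point is just that pulling in the declaring
header removes the implicit declaration):

/* old.c: calling exit() with no declaration in scope makes gcc 4.1
   warn "incompatible implicit declaration of built-in function 'exit'". */
#include <stdio.h>
#include <stdlib.h>   /* declares exit(); adding this silences the warning */

int main(void)
{
    printf("done\n");
    exit(0);
}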


...which is repeated through the compilation of the sources. I can
google lots of build logs with this warning but haven't been able
to get any hits on patches or fixes to eliminate the warning. Thanks
in advance for any hints.
  Jack

 





Re: Very Fast: Directly Coded Lexical Analyzer

2007-05-31 Thread Kevin Handy

Diego Novillo wrote:

We are *always* interested in making GCC faster.  All you need now is a
copyright assignment, the willingness to do the work (or find someone to
do it for you) and the time to implement it.

200% speed gains are nice, especially if they can be replicated outside
the lab.
  


What does a 200% speedup mean?

If a program runs in 10 seconds, then a 200% speedup could be read as
making it run (10 * 200/100 =) 20 seconds faster, so it finishes in
-10 seconds. It must be nice having the results before you start the
program.