Re: sched2, ret, use, and VLIW bundling

2009-06-08 Thread Maxim Kuvyrkov

DJ Delorie wrote:

I'm working on a VLIW coprocessor for MeP.  One thing I noticed is
that sched2 won't bundle the function's RET with the insn that sets
the return value register, apparently because there's an intervening
USE of that register (insn 30 in the example below).

Is there any way around this?  The return value obviously isn't
actually used there, nor does the return insn need it - that USE is
just to keep the return value live until the function exits.


The problem may be that the dependency cost between the SET (insn 27) and 
the USE (insn 30) is >= 1.  Have you tried using the 
targetm.sched.adjust_cost() hook to set the cost of the USE to 0?
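Maxim's suggestion might look roughly like this in the MeP backend (an illustrative sketch only: the hook name and the 4.4-era signature are real, but `mep_sched_adjust_cost` and the exact predicate are assumptions, and this is GCC-internal code that cannot compile outside the tree):

```c
/* Sketch: make dependencies involving bare USE insns free, so sched2
   can bundle the return-value SET with the RET in the same cycle.  */

static int
mep_sched_adjust_cost (rtx insn, rtx link ATTRIBUTE_UNUSED,
                       rtx dep_insn, int cost)
{
  /* A bare USE consumes no functional unit; give the dependency
     zero cost so the dependent insn can issue in the same cycle.  */
  if (GET_CODE (PATTERN (insn)) == USE
      || GET_CODE (PATTERN (dep_insn)) == USE)
    return 0;
  return cost;
}

#undef TARGET_SCHED_ADJUST_COST
#define TARGET_SCHED_ADJUST_COST mep_sched_adjust_cost
```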


Anyway, this seems strange; the scheduler should just output the USEs as 
soon as they are ready.  One of the few places where this can be overridden 
is the targetm.sched.dfa_new_cycle() hook; does your port define it?


--
Maxim


Re: LLVM as a gcc plugin?

2009-06-08 Thread Steven Bosscher
On Mon, Jun 8, 2009 at 3:10 AM, Rafael Espindola wrote:
>> GMP and MPFR are required components of GCC, and every developer has to
>> deal with them.  For interfacing between GCC and LLVM, the experts who'll
>> be able to answer the questions are generally going to be found on the
>> LLVM lists, not the gcc list, and those (like you) who participate on
>> both lists, well, you're on both lists.
>
> That is not the case here. There is already a version of gcc that uses
> llvm.

I'd turn that around: There is already a version of LLVM that uses
GCC.  I don't see any way in which the FSF GCC benefits from this. And
since this list concerns the FSF GCC...

Ciao!
Steven


Information

2009-06-08 Thread Mr Joe Brown

Hi Dearest.

I will like to invest in your country,I will like to know the proceedings of a 
non-Citizen investing in your country? Actually I am contacting you 
‘outstanding that l cannot invest in your country without an assistant from 
someone from your country. 

Factually I want you to advice me on a lucrative business there in your country 
that l can invest on, and the procedure before the investment. Thank you very 
much for taking time to go through my e-mail poko...@sify.com and l hope to 
read your reply very soon. 


God bless you.
My best regards.
Mr. Joe Brown
Email: poko...@sify.com






Re: Intermediate representation

2009-06-08 Thread Ian Lance Taylor
Nicolas COLLIN  writes:

> I want to go through the entire internal tree in GCC but I have a
> problem with functions.
> Indeed I would like to know the declarations and functions called by a
> function.
> I assume I have to go into the function's scope but I don't know how.
> I read the source code but I didn't find anything.

At what point in the compilation process do you want to do that?

After the conversion to GIMPLE, and assuming you have the function decl,
you just walk through the GIMPLE code looking for GIMPLE_CALLs.  To see
the declarations look at the GIMPLE_BINDs.
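Ian's suggestion might be sketched like this against the 4.4/4.5-era tuple API (illustrative only: `handle_call` is a hypothetical helper, and this is GCC-internal code that won't compile outside the tree):

```c
/* Walk a gimplified function body, noting direct calls and recursing
   into GIMPLE_BINDs, which is where the local declarations live.  */

static void
walk_calls (gimple_seq seq)
{
  gimple_stmt_iterator gsi;

  for (gsi = gsi_start (seq); !gsi_end_p (gsi); gsi_next (&gsi))
    {
      gimple stmt = gsi_stmt (gsi);

      switch (gimple_code (stmt))
        {
        case GIMPLE_CALL:
          /* gimple_call_fndecl returns NULL_TREE for indirect calls.  */
          handle_call (gimple_call_fndecl (stmt));
          break;
        case GIMPLE_BIND:
          /* Declarations are in gimple_bind_vars (stmt); recurse for calls.  */
          walk_calls (gimple_bind_body (stmt));
          break;
        default:
          break;
        }
    }
}

/* Entry point, given a FUNCTION_DECL: walk_calls (gimple_body (fndecl));  */
```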

Ian


Re: The C++0x lambda branch

2009-06-08 Thread Maik Beckmann
Esben Mose Hansen schrieb am Montag 27 April 2009 um 20:54:
> Hi,
>
> I have very much been looking forward to the including of the lambda part
> of C++0x. I have been playing around with the lambda branch of gcc, which
> at least superficially works well apart from assorted bugs.
>
> What can I do to make lambdas part of a future release of gcc?
>
> Where do I report/track bugs to the lambda branch?
>
> Or should I contact the maintainer directly?

Is there a schedule for merging C++0x lambda support into mainline?

-- Maik



Intermediate representation

2009-06-08 Thread Nicolas COLLIN

 In my version DECL_SAVED_TREE is defined as:
#define DECL_SAVED_TREE(NODE) DECL_MEMFUNC_POINTER_TO (NODE)
I just looked at DECL_MEMFUNC_POINTER_TO and it doesn't do what I want.
So I don't know how to get the statements out of the FUNCTION_DECL I have.

Nicolas COLLIN

Ian Lance Taylor a écrit :

Nicolas COLLIN  writes:

In fact, I go through the tree in the function finish_decl in the file gcc/cp/
decl.c.
I see every decl and struct in the tree, but my problem is with the node
FUNCTION_DECL: I don't know how to get every statement in it.  I tried the
macro DECL_SAVED_TREE but it doesn't work.  Maybe that is because the version
I use is older than the version you describe in the doc; could you give me the
source code of the previous macro, so I may be able to see what goes wrong.


Please reply to the mailing list, not just to me.

You can only look at the statements after parsing the function is
complete.  At that time DECL_SAVED_TREE should contain a valid value.

Ian


c++0x concepts support

2009-06-08 Thread Onay Urfalioglu
Hi,

I am wondering whether the concepts branch/support is completely unmaintained, 
or whether there is still anyone working on it?

AFAIK, Herb Sutter quit working on the branch a while ago.  As the standard is 
almost finished, shouldn't we be advertising this more aggressively and 
motivating some devs to work on it?

oni


Re: LLVM as a gcc plugin?

2009-06-08 Thread Rafael Espindola
> I'd turn that around: There is already a version of LLVM that uses
> GCC.  I don't see any way in which the FSF GCC benefits from this. And
> since this list concerns the FSF GCC...

That is not a valid turnaround.  We know that the existing LLVM can handle
this.  We are not sure whether the existing plugin infrastructure can.

The same goes for the more general question "should plugin development be
discussed on this list?".  One of the main uses of plugins will probably be
adding features that are not of interest to GCC in general.

> Ciao!
> Steven
>


Cheers,
-- 
Rafael Avila de Espindola

Google | Gordon House | Barrow Street | Dublin 4 | Ireland
Registered in Dublin, Ireland | Registration Number: 368047


Re: Intermediate representation

2009-06-08 Thread Ian Lance Taylor
Nicolas COLLIN  writes:

>  In my version DECL_SAVED_TREE is defined as:
> #define DECL_SAVED_TREE(NODE) DECL_MEMFUNC_POINTER_TO (NODE)
> I just looked at DECL_MEMFUNC_POINTER_TO and it doesn't do what I want.
> So I don't know how to get the statements out of the FUNCTION_DECL I have.

You must be working with gcc before 4.0.  I don't know what to suggest,
except that it will be easier if you work with current gcc.

Ian


Re: [fortran] Different FUNC_DECLS with the same DECL_NAME - MAIN__ and named PROGRAM main functions [was Re: gcc-4.5-20090528 is now available]

2009-06-08 Thread Dave Korn
Jerry DeLisle wrote:
> Tobias Burnus wrote:

>> @@ -3874,6 +3877,8 @@ create_main_function (tree fndecl)
>>tmp =  build_function_type_list (integer_type_node, integer_type_node,
>>build_pointer_type (pchar_type_node),
>>NULL_TREE);
>> +  main_identifier_node = get_identifier ("main");
>> +  ftn_main = build_decl (FUNCTION_DECL, main_identifier_node, tmp);
>>ftn_main = build_decl (FUNCTION_DECL, get_identifier ("main"), tmp);
>>DECL_EXTERNAL (ftn_main) = 0;
>>TREE_PUBLIC (ftn_main) = 1;
>>
>>
> Tobias and Dave,
> 
> I tested the above on x86-64 Linux.  OK to commit.

  I just took a second look at this.  We surely didn't mean to build two decls
and throw one away, did we?  I think the second assignment to ftn_main was
supposed to have been deleted when the middle argument was changed.  It looks
harmless but superfluous to me.  I'll just double-check that removing it
doesn't break anything.  Ok if so?

gcc/fortran/ChangeLog

* trans-decl.c (create_main_function):  Don't build main decl twice.

cheers,
  DaveK
Index: gcc/fortran/trans-decl.c
===
--- gcc/fortran/trans-decl.c	(revision 148276)
+++ gcc/fortran/trans-decl.c	(working copy)
@@ -3876,7 +3876,6 @@
    NULL_TREE);
   main_identifier_node = get_identifier ("main");
   ftn_main = build_decl (FUNCTION_DECL, main_identifier_node, tmp);
-  ftn_main = build_decl (FUNCTION_DECL, get_identifier ("main"), tmp);
   DECL_EXTERNAL (ftn_main) = 0;
   TREE_PUBLIC (ftn_main) = 1;
   TREE_STATIC (ftn_main) = 1;


Re: VTA merge?

2009-06-08 Thread Frank Ch. Eigler
Alexandre Oliva  writes:

>> Do you have any of them handy (memory use, compile time with release
>> checking only, etc) so that we can start the public
>> argument^H^H^H^H^H^discussion?

> I don't, really.  Part of the guidance I expected was on what the
> relevant measures should be.  [...]

Well, disregard "disruptiveness" for now, which people can judge for
themselves by looking at the new code.

As for "costs" in terms of compile time/space and output size, you
should definitely present some preliminary data.  For example,
the time/space for a plain bootstrap with vs. without the VTA patches
applied.  Then another comparison of "-g" vs. "-g0" vs. whatever
corresponds to "full VTA", i.e. local variable debuginfo.
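A minimal form of that comparison might be sketched as follows (illustrative commands only; the build-directory names and test.c are assumptions, not paths from this thread):

```shell
# Wall-clock time for a plain bootstrap: vanilla tree vs. VTA-patched tree.
(cd build-vanilla && time make bootstrap) 2> time-vanilla.log
(cd build-vta && time make bootstrap) 2> time-vta.log

# Object-size impact of debug info on a sample file.
gcc -O2 -g -c test.c -o test-g.o
gcc -O2 -g0 -c test.c -o test-g0.o
size test-g.o test-g0.o
```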

As for "benefits", you could give some gdb (or systemtap :-) session
transcripts that show the new data being used.


- FChE


Re: VTA merge?

2009-06-08 Thread Diego Novillo
On Sun, Jun 7, 2009 at 16:04, Alexandre Oliva wrote:

> So the question is, what should I measure?  Memory use for any specific
> set of testcases, summarized over a bootstrap with memory use tracking
> enabled, something else?  Likewise for compile time?  What else?

Some quick measurements I'd be interested in:

- Size of the IL over some standard code bodies
  (http://gcc.gnu.org/wiki/PerformanceTesting).
- Memory consumption in cc1/cc1plus at -Ox -g over that set of apps.
- Compile time in cc1/cc1plus at -Ox -g.
- Performance differences over SPEC2006 and the other benchmarks
  we keep track of.

Do all these comparisons against mainline as of the last merge
point.

The other set of measurements that would be interesting are
probably harder to specify.  I would like to have a set of
criteria or guidelines on what a pass writer should keep in mind
to make sure that their transformations do not butcher debug
information.  From what I understand, there are two situations
that need handling:

- When doing analysis, passes should explicitly ignore certain
  artifacts that carry debugging info.

- When applying transformations, passes should
  generate/move/modify those artifacts.

Documentation should describe exactly what those artifacts are
and how they should be handled.

I'd like to have a metric of intrusiveness that can be tied to
the quality of the debugging information:

- What percentage of code in a pass is dedicated exclusively to
  handling debug info?
- What is the point of diminishing returns?  If I write 30% more
  to keep track of debug info, will the debug info get 30%
  better?
- What does it mean for debug info to be 30% better?  How do
  we measure 'debug info goodness'?
- Does keeping debug info up-to-date introduce algorithmic
  changes to the pass?

Clearly, if one needs to dedicate a large portion of the pass
just to handle debug information, that is going to be a very hard
sell.  Keeping perfect debug information at any cost is not
sustainable long term.


Diego.


Re: [fortran] Different FUNC_DECLS with the same DECL_NAME - MAIN__ and named PROGRAM main functions [was Re: gcc-4.5-20090528 is now available]

2009-06-08 Thread Tobias Burnus
Dave Korn wrote:
>>> +  main_identifier_node = get_identifier ("main");
>>> +  ftn_main = build_decl (FUNCTION_DECL, main_identifier_node, tmp);
>>>ftn_main = build_decl (FUNCTION_DECL, get_identifier ("main"), tmp);
>>>   
> I just took a second look at this.  We surely didn't mean to build two decls
> and throw one away, did we?
Why have I always read

-   ftn_main = build_decl (FUNCTION_DECL, get_identifier ("main"), tmp);

although there was no "-"?


> I think the second assignment to ftn_main was supposed to have been
> deleted when the middle argument was changed. Ok if so?
Yes, of course it should have been deleted.  OK for the trunk, and thanks
for spotting it!

Tobias

> gcc/fortran/ChangeLog
>
> * trans-decl.c (create_main_function): Don't build main decl twice.


Please update http://gcc.gnu.org/gcc-4.3/buildstat.html

2009-06-08 Thread Dennis Clarke

Re: http://gcc.gnu.org/gcc-4.3/buildstat.html

I was looking for testsuite results to compare with on Solaris and I saw
that nearly every report for GCC 4.3.3 was done by Tom G. Christensen.

All GCC 4.3.3 reports on Solaris are from one person:

i386-pc-solaris2.6  Test results: 4.3.3
i386-pc-solaris2.8  Test results: 4.3.3
i386-pc-solaris2.9  Test results: 4.3.3
i386-pc-solaris2.10 Test results: 4.3.3

sparc-sun-solaris2.6    Test results: 4.3.3
sparc-sun-solaris2.7    Test results: 4.3.3
sparc-sun-solaris2.8    Test results: 4.3.3

I think it is great that we have any reports at all, but for the sake of
diversity and some comparison data I'll add mine in there:

Results for 4.3.3 (GCC) testsuite on sparc-sun-solaris2.8
http://gcc.gnu.org/ml/gcc-testresults/2009-06/msg00501.html

I'll get the i386 reports later this week.

-- 
Dennis Clarke
http://www.blastwave.org/




error in gfc_simplify_expr

2009-06-08 Thread Revital1 Eres

Hello,

I get the following error while bootstrapping trunk -r148275 on ppc.

Thanks,
Revital

/home/eres/mainline_45/build/./prev-gcc/xgcc
-B/home/eres/mainline_45/build/./prev-gcc/
-B/usr/local/powerpc64-unknown-linux-gnu/bin/
-B/usr/local/powerpc64-unknown-linux-gnu/bin/
-B/usr/local/powerpc64-unknown-linux-gnu/lib/
-isystem /usr/local/powerpc64-unknown-linux-gnu/include
-isystem /usr/local/powerpc64-unknown-linux-gnu/sys-include -c  -O3
-DIN_GCC   -W -Wall -Wwrite-strings -Wstrict-prototypes
-Wmissing-prototypes -Wcast-qual -Wold-style-definition -Wc++-compat
-Wmissing-format-attribute -pedantic -Wno-long-long -Wno-variadic-macros
-Wno-overlength-strings -Werror -fno-common  -DHAVE_CONFIG_H -I. -Ifortran
-I../../gcc/gcc -I../../gcc/gcc/fortran -I../../gcc/gcc/../include
-I../../gcc/gcc/../libcpp/include -I/home/eres/mainline_45/build/./gmp
-I/home/eres/mainline_45/gcc/gmp -I/home/eres/mainline_45/build/./mpfr
-I/home/eres/mainline_45/gcc/mpfr  -I../../gcc/gcc/../libdecnumber
-I../../gcc/gcc/../libdecnumber/dpd
-I../libdecnumber ../../gcc/gcc/fortran/expr.c -o fortran/expr.o
cc1: warnings being treated as errors
../../gcc/gcc/fortran/expr.c: In function 'gfc_simplify_expr':
../../gcc/gcc/fortran/expr.c:1660:8: error: 'start' may be used
uninitialized in this function
../../gcc/gcc/fortran/expr.c:1655:15: error: 'end' may be used
uninitialized in this function
make[3]: *** [fortran/expr.o] Error 1
make[3]: Leaving directory `/home/eres/mainline_45/build/gcc'
make[2]: *** [all-stage2-gcc] Error 2
make[2]: Leaving directory `/home/eres/mainline_45/build'
make[1]: *** [stage2-bubble] Error 2
make[1]: Leaving directory `/home/eres/mainline_45/build'
make: *** [bootstrap] Error 2

Re: error in gfc_simplify_expr

2009-06-08 Thread Tobias Burnus
Hello,

Revital1 Eres wrote:
> I get the following error while bootstrapping trunk -r148275 on ppc.

Worked with r148271 on x86-64-linux.

> -I../libdecnumber ../../gcc/gcc/fortran/expr.c -o fortran/expr.o
> cc1: warnings being treated as errors
> ../../gcc/gcc/fortran/expr.c: In function 'gfc_simplify_expr':
> ../../gcc/gcc/fortran/expr.c:1660:8: error: 'start' may be used
> uninitialized in this function

The code is new but seems to be OK:

  1657    if (p->ref && p->ref->u.ss.start)
  1658      {
  1659        gfc_extract_int (p->ref->u.ss.start, &start);
  1660        start--;  /* Convert from one-based to zero-based.

I don't see why "start" should be uninitialized here.

Tobias



What is -3.I (as opposed to 0-3.I) supposed to evaluate to?

2009-06-08 Thread Kaveh R. GHAZI
If I write a complex double constant -3.I (as opposed to 0-3.I), what is
it supposed to evaluate to?  This program:

  #include 

  int main(void)
  {
const __complex double C1 = (-3.I);
const __complex double C2 = (0-3.I);

printf ("%f %f\n", __real__ C1, __imag__ (C1));
printf ("%f %f\n", __real__ C2, __imag__ (C2));

return 0;
  }

when compiled with gcc-4.1.2 (and mainline) yields:

-0.00 -3.00
0.00 -3.00

Note the sign difference in the real part.

When I compile it with g++-4.1.2, I get:

compl.c: In function 'int main()':
compl.c:5: error: wrong type argument to unary minus

Is this supposed to happen or is it a bug in complex number parsing?
(Sorry if this is a gcc-help question.)

Thanks,
--Kaveh


Re: What is -3.I (as opposed to 0-3.I) supposed to evaluate to?

2009-06-08 Thread Joseph S. Myers
On Mon, 8 Jun 2009, Kaveh R. GHAZI wrote:

> If I write a complex double constant -3.I (as opposed to 0-3.I), what is
> it supposed to evaluate to?  This program:

Because GCC does not implement imaginary types, this applies unary minus 
to 0.0+3.0I.  Whereas 0-3.I means 0.0 - (0.0+3.0I), a mixed real/complex 
operation, the real part of whose result is 0.0 except when rounding 
towards negative infinity when it is -0.0.  There are lots of tests in 
gcc.dg/torture/complex-sign*.

-- 
Joseph S. Myers
jos...@codesourcery.com


Re: VTA merge?

2009-06-08 Thread Alexandre Oliva
On Jun  8, 2009, Diego Novillo  wrote:

> - Performance differences over SPEC2006 and the other benchmarks
>   we keep track of.

This one is trivial: none whatsoever.  The generated code is the same,
and it *must* be the same.  Debug information must never change the
generated code, and VTA is all about debug information.  There's a lot
of infrastructure to ensure that code remains unchanged, and
-fcompare-debug testing backs this up.  It doesn't make much sense to
run the same code twice to verify that it performs the same, does it?
:-)


> Do all these comparisons against mainline as of the last merge
> point.

I'll start performing the other measurements you requested.  Please be
patient; it will take some time until I figure out how to use the
scripts you pointed me at and locate the code bases.

For the measurements, I won't use the last merge, but rather the trunk
(in which most of the infrastructure patches were already installed,
with minor changes) vs trunk+the posted patchset.  Or maybe I'll do
another merge into the branch, so that we have exact revisions in the
SVN tree to refer to.  I hope you don't mind that I make the tests in a
slightly different tree (it's easier for me, and shouldn't make any
difference for you), but if you insist, I'll do exactly what you
suggested.


> I would like to have a set of criteria or guidelines on what a pass
> writer should keep in mind to make sure that their transformations do
> not butcher debug information.

I've already written about this.  Butchering debug information with this
design is very hard.  Basically, you have to work very hard to break it,
because it's designed so that, unless you actively stop transformations
that are made to executable code from also applying to debug
annotations, you'll keep it up to date and correct.

What needs to be taken care of is something else: avoiding codegen
differences.  This means that whatever factors you use to make decisions
on whether or not to make a transformation shouldn't take debug
annotations into account.  E.g., if you count how many references there
are to a certain DEF, don't take the debug USEs into account.  If you
count how many STMTs there are in a function or block to decide whether
to inline it or duplicate it, don't count the annotations.

And then, in the cases in which a transformation is made when there is
only one (non-debug) reference to a name, it is probably useful to
update any debug insns that refer to that name.  If you don't, debug
info will be less complete, but still correct, at least in SSA land.  In
post-reload RTL it's more important to fix these things up, otherwise
you might end up with incorrect debug info.

That's all.  Doesn't sound that bad, does it?

> From what I understand, there are two situations
> that need handling:

> - When doing analysis, passes should explicitly ignore certain
>   artifacts that carry debugging info.

Yup.  This is where most of the few changes go.  If you fail to do that
where you should, you get -fcompare-debug errors or slightly different
code.

> - When applying transformations, passes should
>   generate/move/modify those artifacts.

Only in very rare circumstances (1- or 0- refs special cases) do they
need special attention.  In nearly all cases, because of their nature,
they're correctly updated just like the optimizer would have to do to
any other piece of code.

> Documentation should describe exactly what those artifacts are
> and how they should be handled.

Are the 3 paragraphs above clear enough?

Where in the documentation do you suggest this should go?

> - What percentage of code in a pass is dedicated exclusively to
>   handling debug info?

In nearly all of the tree passes, it's one or two lines per file, if
it's that much.  In RTL it's sometimes a bit more than that.

> - What is the point of diminishing returns?  If I write 30% more
>   to keep track of debug info, will the debug info get 30%
>   better?

See below.

> - What does it mean for debug info to be 30% better?  How do
>   we measure 'debug info goodness'?

I don't know how to measure “30% better” debug info.  Do you have a
criterion to suggest?

I see at least two dimensions for measuring debug info improvements:
correctness and completeness.  Currently we suck at both.

VTA's design is such that the infrastructure work I've done over its
development addresses the correctness problem once and for all.  The
remaining improvements are in completeness, and those are going to be
(i) in the var-tracking pass and debug info emitters, that still can't
or don't know how to use all the information that reaches them, and (ii)
in passes that currently discard or invalidate debug annotations (so
that variables end up marked as optimized out), but that could retain it
with a bit of additional work.  I don't have any actual examples of
(ii), I'm only aware of their theoretical possibility, so I can't
quantify additional work required for that.  That said, the additional
w

Re: VTA merge?

2009-06-08 Thread Joe Buck
On Mon, Jun 08, 2009 at 02:03:53PM -0700, Alexandre Oliva wrote:
> On Jun  8, 2009, Diego Novillo  wrote:
> 
> > - Performance differences over SPEC2006 and the other benchmarks
> >   we keep track of.
> 
> This one is trivial: none whatsoever.  The generated code is the same,
> and it *must* be the same.  Debug information must never change the
> generated code, and VTA is all about debug information.  There's a lot
> of infrastructure to ensure that code remains unchanged, and
> -fcompare-debug testing backs this up.  It doesn't make much sense to
> run the same code twice to verify that it performs the same, does it?

I haven't kept careful track, but at one point you were talking about
inhibiting some optimizations because they made it harder to keep the
debug information precise.  Is this no longer an issue?  Do you require
that any optimizations that are now in the trunk be disabled?


Re: VTA merge?

2009-06-08 Thread Alexandre Oliva
On Jun  7, 2009, Eric Botcazou  wrote:

>> It would be nice if it worked this way, but the dozens of patches to fix
>> -g/-g0 compile differences I posted over the last several months show
>> it's really not that simple, because the codegen IR does not tell the
>> whole story.  We have kind of IR extensions for debug info, for types
>> and templates, for aliasing information, even for GC of internal data
>> structures, and all of these do affect codegen, sometimes in very subtle
>> ways.

> Yes, precisely, they are IR extensions, most passes shouldn't have to bother 
> with them.

But they do, and we don't mind.  Just count the occurrences of
preserving/copying locations in expr trees, in insns, attributes in REGs
and MEMs.  It's quite a lot.

> Fixing bugs there can probably be done once for all passes.

Unfortunately that's not how things have worked in the past.  Every pass
had to be adjusted over time to stop debug info from being lost, and
Andrew says there's still a lot of work to be done just for correct line
number info, as he found out trying to stuff variable location
information into the infrastructure we used for line numbers.

On top of that, the assumption that the extensions don't require passes
to bother with them is unfortunately false.  The misuse of the
extensions has caused a number of codegen bugs over time, and the
patches I posted and installed recently are only a few examples of that;
several others were posted, approved and installed along the way over
the past year or so.  They weren't debug info errors, they were codegen
errors caused by existing debug info IR extensions.

Part of the problem, I think, is precisely that they were so invisible
that people often forgot their existence and their interaction, and got
sloppy about it beyond the point of rupture.  That's why, in my design,
I focused on optimizing for sloppiness: if you do nothing, you still get
debug info right, and if you care only about codegen, you will notice
codegen issues if you forgot to take debug info into account where it
mattered.

> So, in the end, we seem to agree that your approach is 
> fundamentally different from what we have now.

In some senses, yes.  In others, it's quite the opposite.

It's no different in that it's still there, but mostly unnoticed, like
file:line information and REG/MEM attrs.

It's completely different in that, if you totally forget about it, you
don't get broken auto var location debug info, and you might actually be
reminded, during testing, that your code failed to take debug info into
account, because it caused codegen differences which you do care about.

Isn't that great?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: New Toshiba Media Processor (mep-elf) port and maintainer

2009-06-08 Thread DJ Delorie

> Pending initial (technical) approval

So... Can I get a global maintainer to approve it?


Re: VTA merge?

2009-06-08 Thread Alexandre Oliva
On Jun  8, 2009, Joe Buck  wrote:

> I haven't kept careful track, but at one point you were talking about
> inhibiting some optimizations because they made it harder to keep the
> debug information precise.  Is this no longer an issue?

No, it never was, it must have been some misunderstanding.  I've never
planned on inhibiting any optimizations whatsoever as part of VTA.  The
plan has always been to represent the result of optimizations, not to
modify optimizers.

I suppose there may have been some confusion because of the patch to do
less SSA coalescing to try to improve debug info, long before VTA even
started.  This issue came up again after VTA development was underway,
when it became clear that we could coalesce more, rather than less, and
still get correct and complete debug info.

It is the current trunk code that throttles optimization for better
debug information.  VTA doesn't need that.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist  Red Hat Brazil Compiler Engineer


Re: What is -3.I (as opposed to 0-3.I) supposed to evaluate to?

2009-06-08 Thread Kaveh R. Ghazi

From: "Joseph S. Myers" 


On Mon, 8 Jun 2009, Kaveh R. GHAZI wrote:


If I write a complex double constant -3.I (as opposed to 0-3.I), what is
it supposed to evaluate to?  This program:


Because GCC does not implement imaginary types, this applies unary minus
to 0.0+3.0I.  Whereas 0-3.I means 0.0 - (0.0+3.0I), a mixed real/complex
operation, the real part of whose result is 0.0 except when rounding
towards negative infinity when it is -0.0.  There are lots of tests in
gcc.dg/torture/complex-sign*.


Okay thanks.

Perhaps the only safe way to create the value, even in the presence of 
rounding mode changes, is to use conj(3.I)?


   --Kaveh



several installed gcc, or libdir should depend upon -program-suffix...

2009-06-08 Thread Basile STARYNKEVITCH

Hello All,

I want to install several variants of gcc; to be specific: the trunk, 
the LTO branch, and the MELT branch (all in the same prefix, i.e. /usr/local).


I thought that just configuring each variant with its own program suffix 
would be enough, so I configured the trunk with --program-suffix=-trunk, 
the LTO branch with --program-suffix=-lto, the MELT branch with 
--program-suffix=-melt


However, this does not work, since all three installations have the same 
libexecsubdir, that is /usr/local/libexec/gcc/x86_64-unknown-linux-gnu/4.5.0.


Is there a configure option I missed? I definitely want all three to be 
in the same prefix, with the user programs all under /usr/local/bin, i.e. 
/usr/local/bin/gcc-{trunk,lto,melt}.


Or should we patch gcc/Makefile.in and probably gcc/configure.ac 
so that libsubdir and libexecsubdir depend upon the program suffix at 
configure time?


How do you folks have several GCC installed at the same prefix?
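One workaround sometimes suggested (a sketch only; the per-variant libexecdir/libdir paths are illustrative assumptions, and --libexecdir's interaction with GCC's version/target subdirectories should be verified):

```shell
# Give each variant its own libexecdir/libdir so the internals don't
# collide, while all driver binaries share /usr/local/bin.
../gcc-trunk/configure --prefix=/usr/local \
    --program-suffix=-trunk \
    --libexecdir=/usr/local/libexec/gcc-trunk \
    --libdir=/usr/local/lib/gcc-trunk
```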

Regards.

PS: I hope all gcc summit attendees have a nice summit.

--
Basile STARYNKEVITCH http://starynkevitch.net/Basile/
email: basilestarynkevitchnet mobile: +33 6 8501 2359
8, rue de la Faiencerie, 92340 Bourg La Reine, France
*** opinions {are only mines, sont seulement les miennes} ***