Root Node of AST

2005-03-12 Thread Rajesh Babu
Hi,
	I want the root node of the AST built by GCC, so that I can perform my 
manipulations on the intermediate nodes of the AST.  Can someone 
tell me where I can find the root node of the AST, and where in the 
GCC source the construction of the AST finishes?

Thanks in Advance
Rajesh Babu


PRE in GCC-3.3.3

2005-03-12 Thread Rajesh Babu
Hi,
	I found that PRE is not done in gcc-3.3.3, though the code for 
doing PRE exists in the source.

	In the following example, the expression a+b is partially redundant 
and should be eliminated, but it is not when gcc-3.3.3 is used, whereas 
it is when gcc-3.4.1 is used.

Can someone confirm that this is a problem with gcc-3.3.3?
#include <stdio.h>

void foo (void)
{
  int a, b, c, d;
  scanf ("%d%d", &a, &b);
  if (a)
    {
      c = a + b;
    }
  else
    {
      d = c++;
    }
  d = c;
  c = a + b;   /* partially redundant: already computed when a != 0 */
  printf ("%d%d", c, d);
}
Rajesh Babu


Re: Root Node of AST

2005-03-12 Thread Diego Novillo
On 03/12/05 08:14, Rajesh Babu wrote:
I want the root node of the AST built by GCC, so that I can perform my 
manipulations on the intermediate nodes of the AST.  Can someone 
tell me where I can find the root node of the AST, and where in the 
GCC source the construction of the AST finishes?

If you are working in mainline or GCC 4.0, start with 
tree-optimize.c:init_tree_optimization_passes.  There's documentation 
about writing your own passes in doc/tree-ssa.texi.

Diego.


Re: PR debug/19345

2005-03-12 Thread Daniel Berlin

On Fri, 11 Mar 2005, Jason Merrill wrote:
On Tue, 08 Mar 2005 10:28:44 -0500, Daniel Berlin <[EMAIL PROTECTED]> wrote:
However, according to Jakub,
"
TYPE_NAME (TYPE_MAIN_VARIANT (origin)) on that testcase is NULL, so it
doesn't help match."
...which is why we still need TYPE_STUB_DECL.  It seems to be the only way
we can reliably find a decl for a RECORD_TYPE.  Except that in this
testcase we can't.
Since this was our proposed solution, I think the only option we have at
this point is to remove the type = TYPE_STUB_DECL (type) line, and just
ignore TYPE_STUB_DECL.
No, that would completely break that piece of code, which is trying to find
the right function to force out.
Sigh.

Either that, or someone needs to fix the "real underlying bug" here, if
we can figure out what that is.
I think the real bug is that TYPE_STUB_DECL isn't being set for the
RECORD_TYPE.
Hmmm.
I don't know when I'll have time to try to figure out why this is the 
case.


BTW, why isn't this cc'd to a mailing list?
Accident.  I thought I had.
It is now.
Jason


Re: Root Node of AST

2005-03-12 Thread Rajesh Babu
I am working with gcc-3.3.3, and I want to insert my module after 
the construction of the AST and before RTL generation.

Rajesh
On Sat, 12 Mar 2005, Diego Novillo wrote:
On 03/12/05 08:14, Rajesh Babu wrote:
I want the root node of the AST built by GCC, so that I can perform my 
manipulations on the intermediate nodes of the AST.  Can someone 
tell me where I can find the root node of the AST, and where in the 
GCC source the construction of the AST finishes?

If you are working in mainline or GCC 4.0, start with 
tree-optimize.c:init_tree_optimization_passes.  There's documentation about 
writing your own passes in doc/tree-ssa.texi.

Diego.


Re: How is lang.opt processed?

2005-03-12 Thread Mike Stump
On Friday, March 11, 2005, at 06:39  PM, Steve Kargl wrote:
What is even more appalling is that there is no way to inhibit the
swallowing of the options.
Sure there is, it is just a matter of code.  Check out --classpath and 
option_map in gcc.c, for example.  It doesn't seem to be harder than 
adding one line per option.  It would be even better if these were 
contributed by language fragments that combined to form this table, 
but you get the idea.



Re: How is lang.opt processed?

2005-03-12 Thread Steve Kargl
On Sat, Mar 12, 2005 at 10:08:18AM -0800, Mike Stump wrote:
> On Friday, March 11, 2005, at 06:39  PM, Steve Kargl wrote:
> >What is even more appalling is that there is no way to inhibit the
> >swallowing of the options.
> 
> Sure there is, it is just a matter of code.  Check out --classpath and 
> option_map in gcc.c, for example.  It doesn't seem to be harder than 
> adding one line per option.  It would be even better if these were 
> contributed by language fragments that combined to form this table, 
> but you get the idea.

If lang.opt is the canonical method for declaring language-specific
options, then there should be a feature in the lang.opt parsing
to override all other options.  For example,

i8
F95 override
Set the default integer kind to double precision

Expecting someone working only on the Fortran front end
to check gcc.c (or any other source file) for obscure
documentation is stupid.

-- 
Steve


Re: How is lang.opt processed?

2005-03-12 Thread Mike Stump
On Saturday, March 12, 2005, at 10:43  AM, Steve Kargl wrote:
If lang.opt is the canonical method used to declare language
specific option, then there should be a feature in parsing
lang.opt to override all other options.
Hard to disagree with anything you said...


Re: __builtin_cpow((0,0),(0,0))

2005-03-12 Thread Paul Schlie
> From: Gabriel Dos Reis <[EMAIL PROTECTED]>
>> Paul Schlie <[EMAIL PROTECTED]> writes:
> | > Gabriel Dos Reis wrote:
> | > You probably noticed that in the polynomial expansion, you are using
> | > an integer power -- which everybody agrees yields 1 at the limit.
> | >
> | > I'm talking about 0^0, when you look at the limit of the function x^y.
> | 
> | Out of curiosity, on what basis can one conclude:
> | 
> |  lim{|x|==|y|->0} x^y :: lim{|x|==|y|->0} (exp (* y (log x))) != 1 ?
> 
> The issue is not whether the limit of x^x, as x approaches 0, is 1 or not.
> We all, mathematically, agree on that.
> 
> The issue is whether the *bivariate* function x^y has a defined limit
> at (0,0).  And the answer is unambiguously no.
> Checking just one path does NOT suffice to assert that the limit
> exists.  (However, that might suffice to assert that a limit does not
> exist.)
> 
> I'm deeply buried somewhere in the middle-west deserts and I don't have
> much of a reliable connection, so I'll point you to the message
> 
> http://gcc.gnu.org/ml/gcc/2005-03/msg00469.html
> 
> where I've tried to taint this discussion with some realities about what
> standards bodies think of the 0^0 arithmetic, and counterexamples you can
> check for yourself.
> 
> | As although its logarithmic decomposition may yield intermediate complex
> | values, and may diverge prior to converging as they approach their limit,
> | it seems fairly obvious that the expression converges to the value of 1.
> 
> You've transmuted the function x^y into the function x^x, which is a
> different beast.  Existence of a limit of the latter does not imply
> existence of a limit of the former.  Again, check the counterexamples in
> the message I referred to above.

Thank you. In essence, I've intentionally defined the question of x^y's
value about x=y->0 as a constrained "bivariate" function, where only
the direction, not the relative rate, of the arguments' paths is ambiguous;
I believe that when the numerical representation system has no provision
to express their relative rates of convergence, the rates should be assumed
equal.  The question of a function's value about any static point, such as
(0,0) or (2,4), is invalid unless that point is well defined within its
arguments' path; if it is, then the constrained representation is
equally valid, but not otherwise (and nor is the question).

In other words, the question of an arbitrary function's value about an
arbitrary static point is just that; it is not a question about a
function's value about a point which may or may not be intersected
by another function further constraining its arguments.

Therefore the counter-argument observing that x^y is ambiguous when further
constrained by y = k/ln(x) is essentially irrelevant, as the question is
what the value of x^y is, with no provision to express further constraints
on its arguments.  Just as the value of (x + y), if further constrained by
y = x, about the point (1,2) would be both ambiguous and irrelevant to
the defined value of (x + y) about (1,2).

I believe things are being confused by a misinterpretation of what a limit
about an infinite boundary truly means.  Most understand that
lim{x->1; y->2} implies convergence about the static valid point (1,2), and
that if x and y were further constrained such that (1,2) were invalid, then
so too would be the question; lim{x->0; y->0} should be treated the same
way.  But it is being abused by those who don't understand that just
because all of a function's arguments may approach a given set of values
eventually, if they do not do so simultaneously, then that set of values
does not lie in the function's path and is therefore irrelevant; just as
(0,0) does not lie in y = k/ln(x)'s path, and is therefore an invalidating
simultaneous constraint.  Otherwise it would be valid to argue that (0,0)
lies on y = x, y = x + 1, y = x + 2, ... simultaneously, which is more
obviously false.  That makes it more apparent that it's important to
differentiate the static points these limits imply from the infinite
boundaries they abstractly represent, in the form of
0 ~ lim{->0} :: lim{->1/inf}, and inf ~ lim{->inf} :: lim{->1/->0}.

Very long story short, it seems clear that:

  f(a,b) :: lim{v->1/inf} f(a+/-v, b+/-v)

about any static point, when defined independently of any other arbitrary
constraints.




Re: Received welcome message.

2005-03-12 Thread Steve Kargl
On Sat, Mar 12, 2005 at 11:12:18AM +0800, Feng Wang wrote:
> Hello,
> 
> I have received the confirmation mail for my application for "write after
> approval".  Thanks, all.
> 
> p.s. Steve, I think I can commit the patch for PR 18827 myself.
> If you have reviewed it, please notify me.
> 
> Best Regards,
> Feng Wang
> 

The patch is OK for both mainline and the 4.0 branch
after you add the appropriate comments as requested
by Paul.  See

http://gcc.gnu.org/ml/fortran/2005-03/msg00209.html

-- 
Steve


RFC: Changes in the representation of call clobbers

2005-03-12 Thread Diego Novillo

To represent call-clobbering effects, we currently collect all the
call-clobbered symbols and make every call a definition site for each of
them.  For instance, given three global variables X, Y and Z:
foo()
{
  X = 3
  Y = 1
  Z = X + 1
  bar ()
  Y = X + 2
  return Y + Z
}
we put the three symbols in FUD chain form as follows:
foo()
{
   # X_2 = V_MUST_DEF 
   X = 3
   # Y_4 = V_MUST_DEF 
   Y = 1
   # Z_6 = V_MUST_DEF 
   # VUSE 
   Z = X + 1
   # X_3 = V_MAY_DEF 
   # Y_5 = V_MAY_DEF 
   # Z_7 = V_MAY_DEF 
   bar ()
   # VUSE 
   # Y_8 = V_MUST_DEF 
   Y = X + 2
   # VUSE 
   # VUSE 
   return Y + Z;
}
The real IL is more detailed, but that's the general idea.  You'll see
that the call to bar() generates one may-def for each call-clobbered
variable.  This prevents the optimizers from propagating known values
across the call.  For instance, the value '3' associated with 'X_2' cannot
be propagated across the call because Y = X + 2 uses 'X_3'.  However,
X_2 can be propagated into 'Z = X + 1', so the optimizers can change
that to 'Z = 4'.
However, if we had 50 call-clobbered variables, that call to bar() would
generate 50 may-defs.  This can quickly grow out of hand for real
programs, so we have a throttling mechanism which puts all the
call-clobbered variables inside the same bag to reduce the number of
virtual operands.
What we do is create a single symbol (.GLOBAL_VAR or GV) and then we
alias every call-clobbered variable to it.  So in this case, we end up with:
foo()
{
   # GV_2 = V_MAY_DEF 
   X = 3
   # GV_4 = V_MAY_DEF 
   Y = 1
   # GV_5 = V_MAY_DEF 
   Z = X + 1
   # GV_6 = V_MAY_DEF 
   bar ()
   # GV_7 = V_MAY_DEF 
   Y = X + 2
   # VUSE 
   return Y + Z;
}
So, we've reduced the number of virtual operands, but we can no longer 
make some transformations that we could before (we are no longer able to 
propagate 3 into Z = X + 1).

We discussed this briefly on IRC; Dan described what the IBM
compiler does, but he thinks it may be too intrusive given GCC's
implementation of FUD chains.
Another idea is already documented in a FIXME note in
maybe_create_global_var: create separate GVs for each type alias set.
But that would not help in this case, since X, Y and Z are all of the
same type.
I've started thinking a little bit about this, and perhaps we could do 
something along the lines of the idea that Jan proposed a few months ago 
(I don't recall if it was a private or public message, Jan?).  Jan's 
original proposal drastically reduced the number of vops, but it had a 
similar restriction on the optimizers; things would be too interconnected.

The variation that I have in mind is to consider every load and store 
operation a *load* of GLOBAL_VAR.  Only call sites generate new names 
for GLOBAL_VAR.  Something like this:

foo()
{
   # X_2 = V_MUST_DEF 
   # VUSE 
   X = 3
   # Y_4 = V_MUST_DEF 
   # VUSE 
   Y = 1
   # Z_6 = V_MUST_DEF 
   # VUSE 
   # VUSE 
   Z = X + 1
   # GV_14 = V_MAY_DEF 
   bar ()
   # VUSE 
   # Y_8 = V_MAY_DEF 
   # VUSE 
   Y = X + 2
   # VUSE 
   # VUSE 
   # VUSE 
   return Y + Z;
}
Advantages of this scheme:
- It doesn't unnecessarily tie up the optimizers.  We can still 
propagate 3 into Z = X + 1 (as it loads X_2 and GV_13).

- Every call site will always generate exactly one V_MAY_DEF.
- We can naturally block propagations across call sites, notice how the 
load of X at 'Y = X + 2' loads from X_2 and GV_14 (the previous store 
had GV_13).

Disadvantages:
- It generates an additional vop per load/store.  In some cases that may 
generate more virtual operands than we generate now, though I think 
that would be rare.

- Those VUSEs for GV at store sites look wonky.
- If there are no uses of X, Y and Z after the call to bar(), DCE will 
think that those stores are dead.  We would have to hack DCE to somehow 
see the call to bar() as a user of those stores.

The last problem is the one I have the most reservations about.  We could 
relate the stores to the call sites some other way, outside the SSA web, 
but I don't much like any kind of implicit data-flow relationship.
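To make the propagation argument concrete, the reaching-definition check under the two schemes can be modeled as a toy (hypothetical C, not GCC code; the SSA version names follow the examples above):

```c
#include <string.h>

/* Toy model of the virtual-operand schemes above: a stored constant
   reaches a later load iff every virtual operand associated with the
   store is still among the operands the load uses, i.e. no intervening
   (may-)definition created a newer SSA version.  */
static int
reaches (const char **store_vops, int ns, const char **load_vops, int nl)
{
  for (int i = 0; i < ns; i++)
    {
      int found = 0;
      for (int j = 0; j < nl; j++)
        if (strcmp (store_vops[i], load_vops[j]) == 0)
          found = 1;
      if (!found)
        return 0;
    }
  return 1;
}

/* Single .GLOBAL_VAR scheme: "X = 3" defines GV_2, but "Y = 1" then
   defines GV_4, so the load of X in "Z = X + 1" uses GV_4 and the
   constant 3 is blocked even though no call intervened.  */
int
gv_only_propagates (void)
{
  const char *x_store[] = { "GV_2" };
  const char *z_load[] = { "GV_4" };
  return reaches (x_store, 1, z_load, 1);
}

/* Proposed scheme: the store keeps the real name X_2 plus the GV
   version current at the store (GV_13); only the call site creates a
   new version (GV_14).  */
int
new_scheme_propagates_before_call (void)
{
  const char *x_store[] = { "X_2", "GV_13" };
  const char *z_load[] = { "X_2", "GV_13" };  /* Z = X + 1, before bar() */
  return reaches (x_store, 2, z_load, 2);
}

int
new_scheme_propagates_after_call (void)
{
  const char *x_store[] = { "X_2", "GV_13" };
  const char *y_load[] = { "X_2", "GV_14" };  /* Y = X + 2, after bar() */
  return reaches (x_store, 2, y_load, 2);
}
```

The model blocks X = 3 from reaching Z = X + 1 under the single-GV scheme, but allows it under the new scheme while still blocking propagation across the call.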

Thoughts?  Ideas?
Thanks.  Diego.


DR#236 update

2005-03-12 Thread Joseph S. Myers
The pre-Lillehammer WG14 mailing includes N 
 with an 
updated analysis of the DR#236 aliasing issues taking account of comments 
previously made on this list.

-- 
Joseph S. Myers   http://www.srcf.ucam.org/~jsm28/gcc/
[EMAIL PROTECTED] (personal mail)
[EMAIL PROTECTED] (CodeSourcery mail)
[EMAIL PROTECTED] (Bugzilla assignments and CCs)


Re: Feature request: Globalize symbol

2005-03-12 Thread Kai Henningsen
[EMAIL PROTECTED] (Richard Henderson)  wrote on 11.03.05 in <[EMAIL PROTECTED]>:

> On Fri, Mar 11, 2005 at 02:48:35AM +0100, Hans-Peter Nilsson wrote:
> > > Isn't a compiler option -fglobalize-symbol also a form of source-level
> > > instrumentation?  Either way, you need the source, and you get different
> > > code emitted.
> >
> > This isn't a source-level modification, by definition.
>
> For some definition of definition.
>
> I, for one, do not like the idea of this extension at all.
> Seems to me that if you have the source, you might as well
> modify it.  I see no particular reason to complicate things
> just to accommodate an aversion to using patch(3).

You have a library implementation of patch(1)? ;-)

Anyway, that seems to be very much the wrong tool to me. For stuff like  
this, you'd really want a tool that understands C, so it can make a  
certain modification at certain syntactic places. You wouldn't want to  
implement -finstrument-functions with patch, either, would you?

MfG Kai


Re: __builtin_cpow((0,0),(0,0))

2005-03-12 Thread Kai Henningsen
[EMAIL PROTECTED] (Robert Dewar)  wrote on 07.03.05 in <[EMAIL PROTECTED]>:

> Ronny Peine wrote:
>
> > Sorry for this, maybe i should sleep :) (It's 2 o'clock here)
> > But as i know of 0^0 is defined as 1 in every lecture i had so far.
>
> Were these math classes, or CS classes.

Let's just say that this didn't happen in any of the German math classes I  
ever took, in school or at university. This is in fact a classic example of  
this type of behaviour.

> Generally when you have a situation like this, where the value of
> the function is different depending on how you approach the limit,
> you prefer to simply say that the function is undefined at that
> point.

And that's how it was always taught to me.


This is, of course, a different question from what a library should  
implement ... though I must say that if I were interested in NaNs at all  
for a given problem, I'd be disappointed by any such library that didn't  
return a NaN for 0^0, and by any language standard saying it should  
return 1; I'd certainly consider a result of 1 wrong in the general case.
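Incidentally, the library answer is pinned down for C: C99 Annex F (F.9.4.4) requires pow(x, ±0) to return 1 for any x, including zero, which can be checked directly on a conforming implementation:

```c
#include <math.h>

/* C99 Annex F (F.9.4.4) requires pow(x, +/-0) to return 1 for any x,
   even x == 0 (and even a NaN) -- no NaN result, no domain error --
   so a conforming libm returns exactly 1.0 here.  */
double
zero_to_the_zero (void)
{
  return pow (0.0, 0.0);
}
```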

MfG Kai


gcc-4.0-20050312 is now available

2005-03-12 Thread gccadmin
Snapshot gcc-4.0-20050312 is now available on
  ftp://gcc.gnu.org/pub/gcc/snapshots/4.0-20050312/
and on various mirrors, see http://gcc.gnu.org/mirrors.html for details.

This snapshot has been generated from the GCC 4.0 CVS branch
with the following options: -rgcc-ss-4_0-20050312 

You'll find:

gcc-4.0-20050312.tar.bz2  Complete GCC (includes all of below)

gcc-core-4.0-20050312.tar.bz2 C front end and core compiler

gcc-ada-4.0-20050312.tar.bz2  Ada front end and runtime

gcc-fortran-4.0-20050312.tar.bz2  Fortran front end and runtime

gcc-g++-4.0-20050312.tar.bz2  C++ front end and runtime

gcc-java-4.0-20050312.tar.bz2 Java front end and runtime

gcc-objc-4.0-20050312.tar.bz2 Objective-C front end and runtime

gcc-testsuite-4.0-20050312.tar.bz2  The GCC testsuite

Diffs from 4.0-20050305 are available in the diffs/ subdirectory.

When a particular snapshot is ready for public consumption the LATEST-4.0
link is updated and a message is sent to the gcc list.  Please do not use
a snapshot before it has been announced that way.


Re: are link errors caused by mixing of versions?

2005-03-12 Thread James E Wilson
Michael Cieslinski wrote:
/usr/bin/ld: Warning: size of symbol
`ACE_At_Thread_Exit::~ACE_At_Thread_Exit()' changed from 46 in
.shobj/POSIX_Proactor.o to 48 in .shobj/Proactor.o
This looks like a destructor function name, which means two different 
versions of gcc generated different code for the same function.  That is 
a common occurrence, and not anything to worry about.

`typeinfo name for ACE_Sbrk_Memory_Pool' referenced in section
`.gnu.linkonce.d._ZTI20ACE_Sbrk_Memory_Pool[typeinfo for
ACE_Sbrk_Memory_Pool]' of .shobj/Local_Name_Space.o: defined in discarded
section `.gnu.linkonce.r._ZTS20ACE_Sbrk_Memory_Pool[typeinfo name for
ACE_Sbrk_Memory_Pool]' of .shobj/Priority_Reactor.o
This means we have a link-once data section that has a reference to a 
link-once read-only data section, and the read-only data section was 
deleted by the linker as unused.  This is not good, but it isn't clear 
if the mixing of compiler versions had anything to do with this.  It is 
possible that some ABI change got implemented after the code-freeze was 
lifted, thus causing the incompatibility.  Or this could be a bug in the 
application.  Or this could be a lot of other things.  We would need 
more info.

`vtable for ACE_Sig_Adapter' referenced in section
`.gnu.linkonce.t._ZN15ACE_Sig_AdapterD0Ev[ACE_Sig_Adapter::~ACE_Sig_Adapter()]'
of .shobj/Local_Name_Space.o: defined in discarded section
`.gnu.linkonce.d._ZTV15ACE_Sig_Adapter[vtable for ACE_Sig_Adapter]' of
.shobj/POSIX_Proactor.o
This is a similar case, except that we have a link-once text section 
referencing a symbol in a link-once data section, and the data section 
was deleted as unused.

/usr/bin/ld: BFD 2.15.91.0.2 20040727 internal error, aborting at
../../bfd/elf64-x86-64.c line 1873 in elf64_x86_64_relocate_section
This is a binutils bug.  The linker shouldn't die like this, even if the 
compiler created bogus output.
--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com


Re: Target specific memory attributes from RTL

2005-03-12 Thread James E Wilson
Balaji S wrote:
_On 11-Mar-2005 02:48, James E Wilson san wrote_:
Is expression evaluation (expr.c, expand_expr_real) converting tree into 
RTL, the right place to extend GCC as required?
Basically, yes.  However, variable declarations are typically handled 
separately from expressions, so if you want to retain info about 
variables, you need to do it elsewhere, when the variable's DECL_RTL is 
generated.  Globals are handled differently from locals, but probably 
only globals matter, in which case you should see make_decl_rtl and 
encode_section_info.

Also, note that in current sources this stuff is all different, because 
the tree-to-RTL expander has been replaced with a tree-to-GIMPLE-to-RTL 
expander.  So what you are doing is probably of no long-term use.
--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com


Re: Merging calls to `abort'

2005-03-12 Thread James E Wilson
Richard Stallman wrote:
Currently, I believe, GCC combines various calls to abort in a single
function, because it knows that none of them returns.
To give this request a little context, consider the attached example. 
If I compile it with -O2 -g and run it under the debugger, it tells 
me that the program died at line 7 (j == 0), which is impossible.  It is 
easy to tell what went wrong in a trivial example like this, but in a 
big program like GCC this can be very confusing.  This reduces the 
usefulness of abort in combination with -g and optimization.

The optimization that causes the problem is crossjumping.  If I compile 
with -O2 -fno-crossjumping I get the desired result.  So rms's request 
is essentially to disable crossjumping of abort calls.

Incidentally, we have already pushed crossjumping back from -O to -O2. 
If I use gcc-3.3, the example fails with -O -g.  But with gcc-4, it 
works at -O -g and fails only at -O2 (or -Os) -g.  Since -O -g works 
with current sources, perhaps this is a good enough solution.  Or 
perhaps just asking people to use -fno-crossjumping is a good enough 
solution.

Otherwise, we need to consider the merits of disabling an optimization 
to make debugging easier.  This is a difficult choice to make, but at 
-O2 I'd prefer that we optimize, and suggest other debugging techniques 
instead of relying on the line numbers of abort calls, such as using 
assert instead.

I suppose there is yet another alternative, which is to extend the debug 
info to indicate that the single abort call is both line 5 and line 7, 
and then extend gdb to handle this gracefully, but this does not seem to 
be a practical solution, at least not a practical short term one.
--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com


Re: Merging calls to `abort'

2005-03-12 Thread James E Wilson
James E Wilson wrote:
To give this request a little context, consider the attached example.
This time actually attached.
--
Jim Wilson, GNU Tools Support, http://www.SpecifixInc.com
int
sub (int i, int j)
{
  if (i == 0)
    abort ();
  else if (j == 0)
    abort ();
  else
    return i * j;
}

int
main(void)
{
  return sub (0, 10);
}


Re: Merging calls to `abort'

2005-03-12 Thread Steven Bosscher
On Sunday 13 March 2005 02:07, James E Wilson wrote:
> Richard Stallman wrote:
> > Currently, I believe, GCC combines various calls to abort in a single
> > function, because it knows that none of them returns.
>
> To give this request a little context, consider the attached example.

May I recommend KMail, the mailer that complains when you say you
attached something, and you didn't?  :-)

> Otherwise, we need to consider the merits of disabling an optimization
> to make debugging easier.  This is a difficult choice to make, but at
> -O2, I'd prefer that we optimize, and suggest other debugging techniques
> intead of relying on the line numbers of abort calls.  Such as using
> assert instead.

Right.  Really, abort() is just the wrong function to use if you
want to know *where* a problem occurred.  GCC uses this fancy_abort
define:

system.h:#define abort() fancy_abort (__FILE__, __LINE__, __FUNCTION__)

where fancy_abort() is a, well, fancy abort that prints some more
information about what happened, and where.  IMVHO, any moderately
large piece of software that uses abort should consider using this
kind of construct, or use assert().  Crippling optimizers around
abort is just silly; it attacks a real problem from the wrong
end.  The real problem is that abort() is just not detailed enough.
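A minimal sketch of such a helper (hypothetical code, not GCC's actual fancy_abort; the formatting is split out so the message can be inspected without actually aborting):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build the location message separately so it can be tested/shown
   without killing the process.  */
void
format_abort_message (char *buf, size_t len,
                      const char *file, int line, const char *function)
{
  snprintf (buf, len, "internal error in %s, at %s:%d",
            function, file, line);
}

/* Print where we died, then really abort.  */
void
fancy_abort (const char *file, int line, const char *function)
{
  char buf[256];
  format_abort_message (buf, sizeof buf, file, line, function);
  fprintf (stderr, "%s\n", buf);
  abort ();
}

/* Route every plain abort() call through the informative version.  */
#define abort() fancy_abort (__FILE__, __LINE__, __FUNCTION__)
```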

Gr.
Steven



Re: __builtin_cpow((0,0),(0,0))

2005-03-12 Thread Gabriel Dos Reis
Paul Schlie <[EMAIL PROTECTED]> writes:

[...]

| > You've transmuted the function x^y into the function x^x, which is a
| > different beast.  Existence of a limit of the latter does not imply
| > existence of a limit of the former.  Again, check the counterexamples in
| > the message I referred to above.
| 
| Thank you. In essence, I've intentionally defined the question of x^y's
| value about x=y->0 as a constrained "bivariate" function, where only
| the direction, not the relative rate, of the arguments' paths is ambiguous;
| I believe that when the numerical representation system has no provision
| to express their relative rates of convergence, the rates should be assumed
| equal.

You're seriously mistaken.  In the absence of any further knowledge, one
should not assume anything in particular, which is reflected in LIA-2's
rationale.  You just don't know anything about the rate of the arguments.

| as the question of a function's value about any static point such
| as (0,0) or (2,4), etc., is invalid unless that point is well defined
| within its arguments' path; where if it is, then the constrained
| representation is equally valid, but not otherwise (and nor is the
| question).
| 
| In other words, the question of an arbitrary function's value about an
| arbitrary static point is just that; it is not a question about a
| function's value about a point which may or may not be intersected
| by another function further constraining its arguments.
| 
| Therefore the counter-argument observing that x^y is ambiguous when
| further constrained by y = k/ln(x) is essentially irrelevant; as the
| question is

That was just *one* set of counterexamples.  It is very relevant to 
the complexity of the issue. 

| what the value of x^y is, with no provision to express further
| constraints on its arguments.  Just as the value of (x + y), if further
| constrained by y = x, about the point (1,2) would be both ambiguous and
| irrelevant to the defined value of (x + y) about (1,2).

You're comparing apples and oranges.  "+" is continuous at every point;
"^" is not.  That is the core issue.
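The path-dependence can be spelled out (a standard limit calculation, sketched here for completeness):

```latex
% Along the straight path y = x:
\lim_{x \to 0^+} x^x \;=\; \lim_{x \to 0^+} e^{x \ln x} \;=\; e^0 \;=\; 1 .
% But along the path y = k/\ln x, which also approaches (0,0) as x \to 0^+:
x^y \;=\; e^{y \ln x} \;=\; e^{(k/\ln x)\,\ln x} \;=\; e^k .
% Since e^k can be made any positive value by choice of k, different paths
% give different limits, so \lim_{(x,y) \to (0,0)} x^y does not exist.
```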

-- Gaby



Re: Question about "#pragma pack(n)" and "-fpack-struct"

2005-03-12 Thread feng qiu

James E Wilson wrote:
-fpack-struct and #pragma pack(2) are contradictory instructions, and
-fpack-struct wins.  It was never the intent to allow both.  Current
gcc sources will give a warning saying that the pragma pack is being
ignored.  If you want some structures that aren't fully packed, then you
can't use -fpack-struct.
--
Would it be difficult to modify the GCC sources if I want to use both?
Thanks in advance,
Feng Qiu
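A minimal illustration of the conflict (hypothetical example; the offsets assume a typical target where int is 4 bytes):

```c
#include <stddef.h>

#pragma pack(2)
struct with_pragma { char c; int i; };   /* pragma: i at offset 2 */
#pragma pack()

struct natural { char c; int i; };       /* default: i at offset 4 */

/* Without -fpack-struct, the pragma takes effect; with -fpack-struct,
   the pragma is ignored (current GCC warns) and every member is
   byte-packed instead.  */
int
pragma_took_effect (void)
{
  return offsetof (struct with_pragma, i) == 2
         && offsetof (struct natural, i) == 4;
}
```

Compiled normally, the pragma wins and pragma_took_effect() returns 1; compiled with -fpack-struct, both structs end up byte-packed and the pragma's layout is lost.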



Non-bootstrap build status reports

2005-03-12 Thread Aaron W. LaFramboise
Is there a reason why non-bootstrap build status reports are not
archived?  For example, for the many targets that are only used in cross
configurations, it would be nice to see whether they are working.

Also, it might be nice to have a record of negative build reports.  For
instance, the build status page might have a section listing reports of
failed builds, which could serve as a quick way to determine the health
of a broken port.


Aaron W. LaFramboise



Re: __builtin_cpow((0,0),(0,0))

2005-03-12 Thread Paul Schlie
> From: Gabriel Dos Reis <[EMAIL PROTECTED]>
> |Paul Schlie <[EMAIL PROTECTED]> writes:
> | Thank you. In essence, I've intentionally defined the question of x^y's
> | value about x=y->0 as a constrained "bivariate" function, where only
> | the direction, not the relative rate, of the arguments' paths is
> | ambiguous; I believe that when the numerical representation system has
> | no provision to express their relative rates of convergence, the rates
> | should be assumed equal.
> 
> You're seriously mistaken.  In the absence of any further knowledge, one
> should not assume anything in particular, which is reflected in LIA-2's
> rationale.  You just don't know anything about the rate of the arguments.

I guess I'd simply contend that the value of a function about any point,
in the absence of further formal constraints, should be assumed to
represent its static value about that point, i.e. lim{|v|->1/inf} f(x+v, y+v, ...),

and that applications requiring the calculation of formally parameterized
multivariate functions at boundary limits should reserve that obligation
to themselves, rather than burdening all users of such functions with
arguably less useful NaN results.

But understand that, regardless of my own opinion, it's likely more
important that a function produce predictable results, regardless of
their usefulness on occasion (which is the obligation of the committees
to hopefully decide well).