Re: Feature request: Globalize symbol

2005-03-11 Thread Kai Henningsen
[EMAIL PROTECTED] (James E Wilson)  wrote on 10.03.05 in <[EMAIL PROTECTED]>:

> On Thu, 2005-03-10 at 17:48, Hans-Peter Nilsson wrote:
> > This isn't a source-level modification, by definition.
>
> And I could argue that my suggestion isn't a source-level modification
> either, or I could argue that your suggestion really is a source-level
> modification, but it seems pointless to argue about this.

I'm not sure it's pointless when it's about fundamental vocabulary.

I, and presumably Hans-Peter too, think a source-level modification is  
pretty much defined by taking an editor to the source file. If you can use  
the unchanged source file, then it's not a source-level modification.

MfG Kai


Re: Feature request: Globalize symbol

2005-03-12 Thread Kai Henningsen
[EMAIL PROTECTED] (Richard Henderson)  wrote on 11.03.05 in <[EMAIL PROTECTED]>:

> On Fri, Mar 11, 2005 at 02:48:35AM +0100, Hans-Peter Nilsson wrote:
> > > Isn't a compiler option -fglobalize-symbol also a form of source-level
> > > instrumentation?  Either way, you need the source, and you get different
> > > code emitted.
> >
> > This isn't a source-level modification, by definition.
>
> For some definition of definition.
>
> I, for one, do not like the idea of this extension at all.
> Seems to me that if you have the source, you might as well
> modify it.  I see no particular reason to complicate things
> just to accomodate an aversion to using patch(3).

You have a library implementation of patch(1)? ;-)

Anyway, that seems to be very much the wrong tool to me. For stuff like
these, you'd really want a tool that understands C, so it can make a
given modification at certain syntactic places. You wouldn't want to
implement -finstrument-functions with patch, either, would you?
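
(For reference, -finstrument-functions works by making the compiler emit
calls to two user-supplied hooks around every function body - exactly the
kind of transformation patch is useless for. A minimal sketch of the
documented hooks:)

#include <stdio.h>

/* The hooks must not be instrumented themselves, hence the attribute. */
void __attribute__((no_instrument_function))
__cyg_profile_func_enter(void *this_fn, void *call_site)
{
    fprintf(stderr, "enter %p (called from %p)\n", this_fn, call_site);
}

void __attribute__((no_instrument_function))
__cyg_profile_func_exit(void *this_fn, void *call_site)
{
    fprintf(stderr, "exit  %p (called from %p)\n", this_fn, call_site);
}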

MfG Kai


Re: __builtin_cpow((0,0),(0,0))

2005-03-12 Thread Kai Henningsen
[EMAIL PROTECTED] (Robert Dewar)  wrote on 07.03.05 in <[EMAIL PROTECTED]>:

> Ronny Peine wrote:
>
> > Sorry for this, maybe i should sleep :) (It's 2 o'clock here)
> > But as i know of 0^0 is defined as 1 in every lecture i had so far.
>
> Were these math classes, or CS classes.

Let's just say that this didn't happen in any of the German math classes I  
ever took, school or uni. This is in fact a classic example of this type  
of behaviour.

> Generally when you have a situation like this, where the value of
> the function is different depending on how you approach the limit,
> you prefer to simply say that the function is undefined at that
> point.

And that's how it was always taught to me.


This is, of course, a different question from what a library should
implement ... though I must say that if I were interested in NaNs at all
for a given problem, I'd be disappointed by any such library that didn't
return a NaN for 0^0, and by any language standard that says otherwise -
I'd certainly consider a result of 1 wrong in the general case.
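
(For comparison, C99's Annex F does pin down the real-valued case the
other way: pow(x, +/-0) is required to return 1 for any x, even a NaN.
A quick check, assuming a C99 libm:)

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* C99 Annex F, F.9.4.4: pow(x, +/-0) returns 1 for any x, even a NaN. */
    printf("pow(0, 0)   = %g\n", pow(0.0, 0.0));
    printf("pow(NaN, 0) = %g\n", pow((double)NAN, 0.0));
    return 0;
}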

MfG Kai


Re: Merging calls to `abort'

2005-03-13 Thread Kai Henningsen
[EMAIL PROTECTED] (Steven Bosscher)  wrote on 13.03.05 in <[EMAIL PROTECTED]>:

> On Sunday 13 March 2005 02:07, James E Wilson wrote:
> > Richard Stallman wrote:
> > > Currently, I believe, GCC combines various calls to abort in a single
> > > function, because it knows that none of them returns.
> >
> > To give this request a little context, consider the attached example.
>
> May I recommend KMail, the mailer that complains when you say you
> attached something, and you didn't?  :-)
>
> > Otherwise, we need to consider the merits of disabling an optimization
> > to make debugging easier.  This is a difficult choice to make, but at
> > -O2, I'd prefer that we optimize, and suggest other debugging techniques
> > instead of relying on the line numbers of abort calls.  Such as using
> > assert instead.
>
> Right.  Really, abort() is just the wrong function to use if you
> want to know *where* a problem occurred.  GCC uses this fancy_abort
> define:
>
> system.h:#define abort() fancy_abort (__FILE__, __LINE__, __FUNCTION__)
>
> where fancy_abort() is a, well, fancy abort that prints some more
> information about what happened, and where.  IMVHO any moderately
> large piece of software that uses abort should consider using this
> kind of construct, or use assert().  Crippling optimizers around
> abort is just silly.  It's attacking a real problem from the wrong
> end.  The real problem is that abort() is just not detailed enough.

For what little it's worth, I agree completely. In any significant
C-family project, I always have macros to handle these things - usually
significantly more involved than your example.

In a certain Objective-C project, I even have Perl-generated  
Exceptions.[hm] so a number of different error situations can be handled  
by similar syntax. For example, for errno-setting cases, I have

extern NSString *OSException;
extern NSString *NameOSError(int Eerrno);
extern void RaiseOSException(int Eerrno, const char *Efile, int Eline,
                             const char *Efunc, const char *Econd,
                             NSString *Efmt, ...);
#define RAISE_OS_R(_cond, _fmt, _info...) do { int _err = errno; \
    RaiseOSException(_err, __FILE__, __LINE__, __PRETTY_FUNCTION__, \
                     #_cond, _fmt ,##_info); } while (0)
#define RAISE_OS_I(_cond, _fmt, _info...) do { if (_cond) \
    RAISE_OS_R(_cond, _fmt ,##_info); } while (0)

so I can then write

RAISE_OS_I(fseeko(AdrF, 0, SEEK_END) != 0, AdrFName);

or

Recs = fopen(StringToCP(Data, CP_fs), "r+b");
if (!Recs && errno == ENOENT) {
    ...
}
else {
    if (!Recs)
        RAISE_OS_R("fopen(StringToCP(Data, CP_fs), \"r+b\")", Data);
    ...
}

or even, if the situation doesn't call for an exception,

[errors addObject: [NSString stringWithFormat: @"%@: error opening: %@",  
fd->name, NameOSError(err)]];


And yes, ultimately, if the exception isn't caught before, there will be  
an abort():

static void __attribute__((__noreturn__)) Panic(const char *f, int l)
{
    Cprintf("<-\n\rPanic at %s:%d\n\r", f, l);
    fflush(stderr);
    shutdownConsole();
    abort();
}

... with another reference to what logic decided not to continue after  
all.

Most of the rest of the error handling in this project is concerned with
the absence of the feature I loved in the IBM PL/I compilers under the
name "SNAP;" - putting out a stack backtrace (the usual idiom for abort()
was "SNAP; STOP;" IIRC). Now *that* would be a useful feature to have.
Seems to me these days it shouldn't be all that hard based on dwarf2 debug
info ... too bad I am still in 2.95.4 land. I will probably try to go
straight to 4.0 or 4.1 one of these days, depending on how reliable it
seems and how much time I get - there are a number of features I'd like to
have. Possibly that could also help my home-grown profiling code (which
works for threaded code under both Linux and Windows - something I could
get none of the "standard" profilers to do - triggered off -finstrument-
functions, but doesn't give as nice info as gprof).
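
(A crude approximation of "SNAP;" is in fact available on glibc-based
systems via <execinfo.h> - a minimal sketch; the symbol names are only as
good as what -rdynamic and the symbol table provide:)

#include <execinfo.h>
#include <stdio.h>
#include <stdlib.h>

/* Roughly "SNAP; STOP;": dump the call stack, then abort. */
static void snap_stop(void)
{
    void *frames[64];
    int n = backtrace(frames, 64);                /* collect return addresses */
    char **names = backtrace_symbols(frames, n);  /* best-effort symbol names */
    int i;

    if (names) {
        for (i = 0; i < n; i++)
            fprintf(stderr, "  %s\n", names[i]);
        free(names);
    }
    abort();
}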

MfG Kai


Re: Use Boehm's GC for compiler proper in 4.1?

2005-04-03 Thread Kai Henningsen
[EMAIL PROTECTED] (Gabriel Dos Reis)  wrote on 02.04.05 in <[EMAIL PROTECTED]>:

> While I know a bit of the third world, I have also been working in some western
> European countries for a sufficient time to say that, well, far too many real
> machines used there for work in universities and research labs still
> don't go beyond 512Mb memory; and they really would love to use GCC and GCC
> should be usable on those machines.

Not just Europe either, if you go even a little bit higher. I occasionally  
go looking for possible motherboards, and surprisingly many aren't  
expandable beyond two gigabytes; and even 64bit ones often seem to be  
limited at or under four! I've one gig today, so I'd like a replacement  
that can take more than two ...

MfG Kai


Re: RFC: #pragma optimization_level

2005-04-03 Thread Kai Henningsen
[EMAIL PROTECTED] (Mark Mitchell)  wrote on 01.04.05 in <[EMAIL PROTECTED]>:

> In fact, I've long said that GCC had too many knobs.
>
> (For example, I just had a discussion with a customer where I explained
> that the various optimization passes, while theoretically orthogonal,
> are not entirely orthogonal in practice, and that turning on another
> pass (GCSE, in this case) avoided other bugs.  For that reason, I'm not
> actually convinced that all the -f options for turning on and off passes
> are useful for end-users, although they are clearly useful for debugging
> the compiler itself.  I think we might have more satisfied users if we
> simply had -Os, -O0, ..., -O3.  However, many people in the GCC
> community itself, and in certain other vocal areas of the user base, do
> not agree.)

Well, yes and no - I sometimes think that gcc doesn't have *enough* knobs.

But, really, not all knobs are equal.

There are classes of knobs that implement a certain kind of control (for  
example, the -Wxxx flags).

There are too many knobs in the sense that, for historical reasons, these  
seemingly-similar knobs often work in subtly different ways.

And there are not enough knobs in the sense that these knobs do not cover  
the whole spectrum that one would want covered - again, mostly for  
historical reasons.

Regularization would be a way to get at both problems at the same time. A  
simple regular class of knobs can be handled much better by actual people  
than a somewhat smaller, irregular one - there are fewer rules to  
remember.

To stay with the warning options example, some options are parts of others  
in nonobvious ways; some rather different warnings are only controlled by  
one option; some warnings have no option to control them. It's *this*  
complexity that is the real problem here, I believe.

If every warning was accompanied by some (presumably not translated) tag,  
and there was an option to enable or disable all warnings with that tag,  
for all tags, that would be much easier to handle, *and* would cover more  
functionality at the same place. *And* you could actually try to do a  
complete warning catalogue for the docs, sorted by tag, and it would still  
work if the warnings themselves were translated and the docs (a much  
bigger job) weren't. Oh, and you'd have more success googling for that  
warning, too.

MfG Kai


Re: Use Boehm's GC for compiler proper in 4.1?

2005-04-03 Thread Kai Henningsen
[EMAIL PROTECTED] (Mike Stump)  wrote on 01.04.05 in <[EMAIL PROTECTED]>:

> On Friday, April 1, 2005, at 08:48  AM, Stefan Strasser wrote:
> >  if gcc uses more memory than physically available it spends a _very_
> > long time swapping
>
> Swapping, what's that?  Here's $20, go buy a gigabyte.

$20? That does not seem to correspond to current prices:

 *  PC100 SDRAM 512MB    from  80.50 EUR
 *  PC133 SDRAM 512MB    from  38.89 EUR
 *  PC266 DDRRAM 1GB     from  86.90 EUR
 *  PC266 DDRRAM 512MB   from  38.00 EUR
 *  PC333 DDRRAM 1GB     from  93.00 EUR
 *  PC333 DDRRAM 512MB   from  34.99 EUR
 *  PC400 DDRAM 512MB    from  32.58 EUR
 *  PC400 DDRRAM 1GB     from  84.00 EUR
 *  PC433 DDRAM 512MB    from  85.00 EUR
 *  PC433 DDRRAM 1GB     from 184.00 EUR
 *  PC466 DDRAM 512MB    from  56.92 EUR
 *  PC466 DDRRAM 1GB     from 116.88 EUR
 *  PC667 DDR2RAM 512MB  from 118.44 EUR
 *  PC800 RDRAM 1GB      from 570.00 EUR
 *  PC800 RDRAM 512MB    from 147.49 EUR

But where do I stick that gigabyte? All my memory slots are in use with  
the current gigabyte. (And yes, this machine *does* swap:

             total       used       free     shared    buffers     cached
Mem:       1030916     978872      52044          0     301948     429344
-/+ buffers/cache:      247580     783336
Swap:      5123072     553404    4569668

)
That's while *not* bootstrapping. Bootstrap is *slow*.

processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 6
model   : 6
model name  : AMD Athlon(tm) Processor
stepping: 2
cpu MHz : 1145.142
cache size  : 256 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 1
wp  : yes
flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca  
cmov pat pse36 mmx fxsr sse syscall mmxext 3dnowext 3dnow
bogomips: 2287.20

Not an old 486 here.

MfG Kai


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-01 Thread Kai Henningsen
[EMAIL PROTECTED] (Andrew Haley)  wrote on 30.04.05 in <[EMAIL PROTECTED]>:

> Matt Thomas writes:
>  > Joe Buck wrote:
>  > > I think you need to talk to the binutils people.  It should be possible
>  > > to make ar and ld more memory-efficient.
>  >
>  > Even though systems may be demand paged, having super large
>  > libraries that consume lots of address space can be a problem.
>  >
>  > I'd like to libjava be split into multiple shared libraries.  In C,
>  > we have libc, libm, libpthread, etc.  In X11, there's X11, Xt, etc.
>  > So why does java have everything in one shared library?  Could the
>  > swing stuff be moved to its own?  Are there other logical
>  > divisions?
>
> It might be nice, certainly.  However, there are some surprising
> dependencies between parts of the Java library, and these would cause
> separate shared libraries to depend on each other, negating most of
> the advantage of separation.
>
> We are in the process of rewriting the Java ABI so that symbol
> resolution in libraries is done lazily rather than eagerly.  This will
> help.  Even so, I would prefer to divide libjava -- if it is to be
> divided -- on a logical basis rather than simply in order to make
> libraries smaller.

Surely the other mentioned library divisions (libc, X) were *also* done on  
a logical basis?!

MfG Kai


Re: volatile semantics

2005-05-05 Thread Kai Henningsen
[EMAIL PROTECTED] (Nathan Sidwell)  wrote on 03.05.05 in <[EMAIL PROTECTED]>:

> Mike Stump wrote:
> > int avail;
> > int main() {
> >   while (*(volatile int *)&avail == 0)
> > continue;
> >   return 0;
> > }
> >
> >
> > Ok, so, the question is, should gcc produce code that infinitely  loops,
> > or should it be obligated to actually fetch from memory?   Hint, 3.3
> > fetched.
>
> I believe the compiler is so licensed. [5.1.2.3/2] talks about accessing
> a volatile object.  If the compiler can determine the actual object
> being accessed through a series of pointer and volatile cast conversions,
> then I see nothing in the std saying it must behave as-if the object
> were volatile when it is not.
>
> This, of course, might not be useful to users :)

As a QOI issue, it would be nice if such a situation caused a warning  
("ignoring volatile cast ..." or something like that).

It's rather dangerous to have the user believe that this worked as  
intended when it didn't.
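
(For comparison, the variant that is unquestionably covered by [5.1.2.3/2]
is to declare the object itself volatile rather than casting at the point
of access:)

volatile int avail;

int main(void)
{
    /* avail is a volatile object, so every iteration must perform a
       real load; the compiler may not hoist or cache it. */
    while (avail == 0)
        continue;
    return 0;
}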

MfG Kai


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-21 Thread Kai Henningsen
[EMAIL PROTECTED] (Peter Barada)  wrote on 17.05.05 in <[EMAIL PROTECTED]>:

> Its a 266Mhz ColdFire v4e machine, about 263 BogoMips, 1/20 the
> BogoMips of my workstation, and with an NFS rootfs, it gets network

BogoMips are called BogoMips because they are not comparable among  
different CPUs. All they measure is how often the CPU needs to run a  
particular near-empty loop to delay a certain time.

There usually is a small factor which can convert between BogoMips and CPU  
MHz for every CPU model. It would seem to be 1 for your ColdFire; it  
happens to be 1/2 for my Athlon (bogomips: 2287.20, cpu MHz: 1145.142).

Comparisons like yours are worse than meaningless.

MfG Kai


Re: Compiling GCC with g++: a report

2005-05-24 Thread Kai Henningsen
[EMAIL PROTECTED] (Mark Mitchell)  wrote on 23.05.05 in <[EMAIL PROTECTED]>:

> Zack Weinberg wrote:
> > Mark Mitchell <[EMAIL PROTECTED]> writes:
> >
> > [snip stuff addressed elsewhere]
> >
> >>I agree with the goal of more hiding.
> >>
> >>You can do this in C by using an incomplete structure type in most
> >>places, and then, in the files where you want the definition visible,
> >>defining the structure to have a single field of the enumerated
> >>type. That is a little messy, but it is C++-compatible.  (In fact, in
> >>ISO C++, without the additions presently in the WP, you can't do
> >>better; forward declarations of enums are still not allowed.)
> >
> >
> > Doesn't work, at least not as a drop-in replacement; you can't pass an
> > incomplete structure by value.  We do do this in places where there's
> > a real structure that can be passed around by pointer...
>
> Good point; yes, you would have to pass a pointer.  I guess you could
> create a singleton representative of each value in the enum, and pass
> them around, but I agree that's getting pretty ugly.  Of course, the
> problem with "unsigned int" is that it is a complete type, and people
> can accidentally pass in "7", even if there's no such enumeral.  You
> really want forward-declared enums, but you haven't got them; it may be
> you just lose. :-(

What I've done, in a similar situation, was to declare a complete  
structure encapsulating the value - this at least makes sure you need to  
acknowledge the structure whenever you access the value. Plus, I've added  
inline functions for accessing the value, so those places don't need to  
know the structure details either.

This makes it fairly type safe, and you can grep for all kinds of uses  
(including people who naughtily access the structure contents directly).
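
Roughly like this - a minimal sketch with invented names, not the actual
code from that project:

typedef struct mode_wrapper { unsigned char value; } mode_wrapper;

static inline mode_wrapper make_mode(unsigned char v)
{
    mode_wrapper m;
    m.value = v;
    return m;
}

static inline unsigned char mode_value(mode_wrapper m)
{
    /* All reads go through here, so uses are easy to grep for and the
       representation can change without touching the callers. */
    return m.value;
}

Unlike an incomplete type, this can still be passed by value, and nobody
can accidentally pass in a bare "7" any more.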

MfG Kai


Re: Sine and Cosine Accuracy

2005-05-28 Thread Kai Henningsen
[EMAIL PROTECTED] (Scott Robert Ladd)  wrote on 26.05.05 in <[EMAIL PROTECTED]>:

> Paul Koning wrote:
> >  Scott> Yes, but within the defined mathematical ranges for sine and
> >  Scott> cosine -- [0, 2 * PI) -- the processor intrinsics are quite
> >  Scott> accurate.

> I *said* that such statements are outside the standard range of
> trigonometric identities. Writing sin(100) is not a matter of necessity,

Actually, no, you did not say that.

You *said* "defined mathematical ranges". See above.

Which is just very, very wrong.

MfG Kai


Re: Sine and Cosine Accuracy

2005-05-28 Thread Kai Henningsen
[EMAIL PROTECTED] (Richard Henderson)  wrote on 26.05.05 in <[EMAIL PROTECTED]>:

> On Thu, May 26, 2005 at 10:34:14AM -0400, Scott Robert Ladd wrote:
> > static const double range = PI; // * 2.0;
> > static const double incr  = PI / 100.0;
>
> The trig insns fail with large numbers; an argument
> reduction loop is required with their use.

Are you claiming that [-PI ... PI] counts as "large numbers"?
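
(To spell out what "argument reduction" means here - folding a large
argument back into a small interval before the hardware instruction is
used. A deliberately naive sketch; for really large x the error in
representing 2*pi as a double makes this inaccurate, which is exactly why
a proper libm has to work much harder:)

#include <math.h>

static const double TWO_PI = 6.283185307179586476925286766559;

/* Reduce into roughly [-pi, pi], then take the sine.  remainder() is C99. */
double sin_naive_reduce(double x)
{
    return sin(remainder(x, TWO_PI));
}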

MfG Kai


Re: GCC 4.1: Buildable on GHz machines only?

2005-05-28 Thread Kai Henningsen
[EMAIL PROTECTED] (Peter Barada)  wrote on 21.05.05 in <[EMAIL PROTECTED]>:

> >> Its a 266Mhz ColdFire v4e machine, about 263 BogoMips, 1/20 the
> >> BogoMips of my workstation, and with an NFS rootfs, it gets network
> >
> >BogoMips are called BogoMips because they are not comparable among
> >different CPUs. All they measure is how often the CPU needs to run a
> >particular near-empty loop to delay a certain time.
>
> I know exactly what a BogoMips is.

But it seems you ignore what it means.

> >There usually is a small factor which can convert between BogoMips and CPU
> >MHz for every CPU model. It would seem to be 1 for your ColdFire; it
> >happens to be 1/2 for my Athlon (bogomips: 2287.20, cpu MHz: 1145.142).
> >
> >Comparisons like yours are worse than meaningless.
>
> I wouldn't call it meaningless.  I don't have other benchmark numbers

It's exactly as meaningful - slightly less, in fact - as just quoting the  
MHz of the chip.

It doesn't tell anything interesting about what the chip can actually do  
with those MHz.

> for the chip, and it was meant to show that it isn't a blazingly fast
> processor (as compared to desktop machines).

So quote the MHz and be done with it.


MfG Kai


Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]

2005-05-29 Thread Kai Henningsen
[EMAIL PROTECTED] (Scott Robert Ladd)  wrote on 28.05.05 in <[EMAIL PROTECTED]>:

> Uros Bizjak wrote:
> > At this point, I wonder what is wrong with Bugzilla, that those
> > programmers don't fill a proper bug report.
>
> In my experience, people don't file Bugzilla reports because it feels
> impersonal and unresponsive. The form is not very user-friendly (as in
> friendly to users of GCC, not its developers.)

This is pretty much incomprehensible to me (NOT a gcc developer, but a gcc  
user - that is, a programmer).

"Feels impersonal"? And this is supposed to be a *problem* with a bug  
reporting system?!

We're not talking about a trouble ticket system for a matchmaking agency  
here, are we? I certainly don't expect technology in general to feel  
personal. And in fact I thought the problem with the mailing lists *was*  
that they got too personal.

Unresponsive? I thought the whole point was to avoid responses (in the  
mailing list)?

Sorry, but this really does not make any sense to me.

As for the user-friendliness of the forms, well, all I can say is that
they're certainly not optimal, but they're at least in the upper 30% of
the forms one encounters on the web in general - and that includes non-
technical stuff. In fact, forms for non-technical stuff are usually
especially bad - presumably because the authors have less understanding of
the technology involved.

MfG Kai


Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]

2005-05-30 Thread Kai Henningsen
[EMAIL PROTECTED] (William Beebe)  wrote on 29.05.05 in <[EMAIL PROTECTED]>:

> On 29 May 2005 11:37:00 +0200, Kai Henningsen <[EMAIL PROTECTED]>
> > wrote: [EMAIL PROTECTED] (Scott Robert Ladd)  wrote on 28.05.05
> > > in In my experience, people don't file Bugzilla reports because it feels
> > > impersonal and unresponsive. The form is not very user-friendly (as in
> > > friendly to users of GCC, not its developers.)
> >
> > This is pretty much incomprehensible to me (NOT a gcc developer, but a gcc
> > user - that is, a programmer).

[...]

> OK, then let me explain it to you. The problem with the GCC Bugzilla
> reporting system is that it's a system that only other developers can
> tolerate, let alone love.

But then, users of GCC typically *are* developers themselves, no?

>The entire GCC website (of which GCC
> Bugzilla is a part) could be the poster child for why developers
> should never be allowed to design user interfaces, especially web user
> interfaces. I'm sure I'll get flamed for wanting style over substance
> or about the proliferation of eye candy, but the GCC web site and its

... which I think are poster children for why non-technical people
*usually* ought not to be allowed to design web sites.

> attendant support pages can only be charitably described as eye trash.
> Yes, you can find the bug link if you read the main page long enough
> and move down the page slowly enough, or if, like me, you fire up
> Firefox's find and use that to go quickly to the bug link. But that's
> beside the point. At the very least the design of the GCC web site
> makes the whole project look like someone who has just discovered the
> web and decided to build a few pages. And please don't harp on making

To me, it looks *very* professional.

> it standards compliant and viewable by every browser in existance.
> There are plenty of well-designed and excellent sites that follow
> those same rules. You just need to be willing to put in the effort to
> look a little more professional and polished. And just to stir the pot

... that effort has pretty obviously been put in.

> further, a web site is an important marketing tool. It's the first
> thing that a lot of people will see when they go looking for help. If
> you want to have more people participate, then make the tools more
> inviting.

I think you're pretty much off by 180 degrees.

MfG Kai


Re: Fixing Bugs

2005-06-16 Thread Kai Henningsen
[EMAIL PROTECTED] (Jonathan Wakely)  wrote on 16.06.05 in <[EMAIL PROTECTED]>:

> On Thu, Jun 16, 2005 at 10:30:03AM -0400, Scott Robert Ladd wrote:
>
> > Aaron W. LaFramboise wrote:
> > > Boosters, FreeBSD hackers, and I'm sure tons of others are calling this
> > > the "Bicycle shed effect."
> > >
> > >  > > -PAINTING>
> >
> > If I'm building a bicycle shed, I may want to talk with others who have
> > done so in the past, learning from the experience and gaining their
> > insights. Why did they use a certain type of construction? What sort of
> > storage did they build in? What worked and didn't work for someone else
> > who has already built a shed? What did they learn from their own work?
> > Any shed I build will be better for such discussions.
>
> You've completely missed the point (rather appropriately).  You're
> *supposed* to be building a nuclear reactor, but have got preoccupied
> discussing the colour of the bicycle shed in which the staff will park
> their bikes.
>
> That's what "bicycle shed painting" refers to.

Actually, based on that FAQ entry, no, it doesn't.

Those are two different projects. It's not a shed as part of a reactor.

It's *not* about being distracted by unimportant details; it's about  
discussing easy projects to death and just rubberstamping hard projects.

Which does not seem to describe any current problem.

MfG Kai


Re: basic VRP min/max range overflow question

2005-06-19 Thread Kai Henningsen
[EMAIL PROTECTED] (Florian Weimer)  wrote on 18.06.05 in <[EMAIL PROTECTED]>:

> * Paul Schlie:
>
> > So in effect the standard committee have chosen to allow any program which
> > invokes any undefined behavior to behave arbitrarily without diagnosis?
> >
> > This is a good thing?
>
> It's the way things are.  There isn't a real market for
> bounds-checking C compilers, for example, which offer well-defined
> semantics even for completely botched pointer arithmetic and pointer
> dereference.
>
> C isn't a programming language which protects its own abstractions
> (unlike Java, or certain Ada or Common Lisp subsets), and C never was
> intended to work this way.  Consequently, the committee was right to
> deal with undefined behavior in the way it did.  Otherwise, the
> industry would have adopted C as we know it, and the ISO C standard
> would have had the same relevance as, say, the ISO Pascal on the
> evolution of Pascal.

And yet, languages like ISO Pascal *still* define undefined behaviour  
pretty much the same way C does. They just choose a different set of  
situations (well, there *is* overlap) to apply it to.

Pretty much all languages which allow access to "the bare metal" need this  
escape clause, because making a program safe in that context pretty much  
requires the compiler to solve the halting problem or equivalent - and we  
can't do that. The alternative is putting enough padding between the  
program and the metal to enable the compiler and runtime system  
significantly more control - and, of course, giving the programmer  
significantly *less* control. For example, there won't be any type-punning  
in such a language. It's a very different trade-off.
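
(To make "access to the bare metal" concrete - the address below is made
up, but this is the sort of thing no compiler can reason about, which is
why the escape clause exists:)

#include <stdint.h>

/* A memory-mapped device status register at a made-up address. */
#define DEVICE_STATUS (*(volatile uint32_t *)0x40001000u)

int device_ready(void)
{
    /* The compiler cannot know what lives behind this address; the
       language can only say "you had better know what you are doing". */
    return (DEVICE_STATUS & 0x1u) != 0;
}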

MfG Kai


Re: basic VRP min/max range overflow question

2005-06-19 Thread Kai Henningsen
[EMAIL PROTECTED] (Robert Dewar)  wrote on 18.06.05 in <[EMAIL PROTECTED]>:

> Here is an interesting example I have used sometimes to indicate just
> how this kind of information can propagate in a manner that would result
> in unexpected chaos. (Ada but obvious analogies in other languages)
>
>
>-- process command to delete system disk, check password first
>
>loop
>   read (password)
>   if password = expected_password then
> delete_system_disk;
>   else
> complain_about_bad_password;
> npassword_attempts := npassword_attempts + 1;
> if npassword_attempts = 4 then
>abort_execution;
> end if;
>   end if;
>end loop;
>
> Now suppose that npassword_attempt is not initialized, and we are in a
> language where doing an operation on an uninitialized value is undefined,
> erroneous or whatever other term is used for undefined disaster.
>
> Now the compiler can assume that npassword_attempts is not referenced,
> therefore it can assume that the if check on password is true, therefore
> it can omit the password check AARGH!
>
> This kind of backward propagation of undefinedness is indeed worrisome,
> but it is quite difficult to create a formal definition of undefined
> that prevents it.

But at least, in that case, the compiler could easily issue the  
(presumably not required by the standard) warning that the else branch is  
"unreachable code".

1/2 :-)

MfG Kai


Re: c/c++ validator

2005-06-20 Thread Kai Henningsen
[EMAIL PROTECTED] (Tommy Vercetti)  wrote on 19.06.05 in <[EMAIL PROTECTED]>:

> I was looking on different ones, for C, that claimed to have ability to find
> security problems. One that I found the best, is splint. But it's still not
> able to find such obvious problem:

Did you look at sparse? That seems to do quite a useful job on the Linux  
kernel (which is, of course, the main reason for its existence). I don't  
really have an idea how good it would be on non-kernel C code. (Not C++,  
obviously.)

MfG Kai


Re: basic VRP min/max range overflow question

2005-06-20 Thread Kai Henningsen
[EMAIL PROTECTED] (Robert Dewar)  wrote on 19.06.05 in <[EMAIL PROTECTED]>:

> Kai Henningsen wrote:
>
> > But at least, in that case, the compiler could easily issue the
> > (presumably not required by the standard) warning that the else branch is
> > "unreachable code".
>
> Yes, absolutely, a compiler should generate warnings as much as possible
> when it is making these kinds of assumptions. Sometimes this is difficult
> though, because the unexpected actions emerge from the depths of complex
> optimization algorithms that don't easily link back what they are doing to
> the source code.

Actually, the reason I named an unreachable code warning was on the
presumption that the compiler would not necessarily realize the problem,
but that a part of the compiler's reasoning would necessarily turn that
branch into unreachable code, and thus could trigger the generic
unreachable code warning completely independently of why that code is
determined to be unreachable.

And of course, that's only applicable to that specific case.

> Actually an easier warning here is that npassword_attempts is uninitialized.
> That should be easy enough to generate (certainly GNAT would generate that
> warning in this situation).
>
> Working hard to generate good warnings is an important part of the compiler
> writers job, even if it is quite outside the scope of the formal standard.
> Being careful to look at warnings and not ignore them is an important part
> of the programmers job :-)

In this context, also see the warning controls project. I'm very, very  
happy that things are finally moving on that front.

MfG Kai


Re: Do C++ signed types have modulo semantics?

2005-06-30 Thread Kai Henningsen
[EMAIL PROTECTED] (Gabriel Dos Reis)  wrote on 27.06.05 in <[EMAIL PROTECTED]>:

> Nathan Sidwell <[EMAIL PROTECTED]> writes:
>
> | Gabriel Dos Reis wrote:
> |
> | > But a compiler could define them to be modulo -- that is the whole
> | > point.  The paragraph does not say they don't "modulo".
> |
> | of course it could do so, but then to be useful it should document it,
> | and it would be an extension.
>
> We're in violent agreement there.

I think there are actually two different interesting properties here.

One is that operations on signed integers can or can not do modulo  
arithmetic.

The other is that for control flow purposes, the compiler can or can not  
assume that no such overflow actually happens.

I think it might be useful to handle these separately - that is, have one  
set of flags handling modulo behaviour (-fwrapv, -ftrapv, neither, possibly
others) and a different set for specifying control flow assumptions
(-fassume-no-wraps, -fno-assume-no-wraps).

-fassume-no-wraps would then allow the compiler to take control flow  
shortcuts based on the assumption that no overflow actually happens, and
-fno-assume-no-wraps would guarantee that control flow handles all  
possible overflow cases.

I don't think it would make sense to specifically call out loops here.  
We've already seen an example where the actual difference happens outside  
the loop.

It might be a little tricky to get the combination of -fassume-no-wraps  
and -fwrapv/-ftrapv right. Or not, I wouldn't know. In any case,  
especially the -ftrapv variant seems useful (assume for control flow that  
no overflows happen, but if they do happen, trap).

It seems to me that most C programs will probably want -fno-assume-no-wraps
and neither wrapv option.
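
(The classic illustration of the difference: a compiler that assumes no
wraps may fold the test below to 0, since x + 1 > x for every
non-overflowing int; under -fwrapv it has to do a real wrapping
comparison, and -ftrapv would trap instead:)

/* With overflow assumed away, this whole function may become "return 0". */
int will_overflow(int x)
{
    return x + 1 < x;
}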

MfG Kai


Re: volatile semantics

2005-07-19 Thread Kai Henningsen
[EMAIL PROTECTED] (Gabriel Dos Reis)  wrote on 17.07.05 in <[EMAIL PROTECTED]>:

> Daniel Berlin <[EMAIL PROTECTED]> writes:
>
> | On Sun, 2005-07-17 at 00:05 +0200, Gabriel Dos Reis wrote:
> | > Daniel Berlin <[EMAIL PROTECTED]> writes:
> | >
> | > [...]
> | >
> | > | You make it sound like  the standard is crystal clear on this issue,
> | > and | everyone who disagrees with your viewpoint are just slimeballs
> | > trying to | get around the clear wording of the standard.
> | >
> | > I think you're profoundly mistaken in your understanding of what I wrote.
> |
> | I read it another few times, and still looks the same to me.
> |
> | "The way I see it is that people who designed and wrote the standard
> | offer their view and interpretation of what they wrote and some people
> | are determined to offer a different interpretation so that they can
> | claim they are well-founded to apply  their transformations."
> |
> |
> | IE there are those whose opinion is right because "they wrote the
>
> see, here is where you added the transmutation.

Well, the more interesting part is the one after "and some". And I agree  
that it certainly reads rather insulting and confrontational - in fact, I  
can't see how else to interpret it.

Can't we keep the personal attacks out of these discussions?

MfG Kai


Re: PR 23046. Folding predicates involving TYPE_MAX_VALUE/TYPE_MIN_VA

2005-08-12 Thread Kai Henningsen
[EMAIL PROTECTED] (Richard Kenner)  wrote on 12.08.05 in <[EMAIL PROTECTED]>:

> What has to happen is that we need some sort of way of indicating that it's
> not permissible to derive information through a particular conversion.

That may be the only practical solution, but it seems to me it's not the  
perfect solution.

The point is that there are two different kinds of value range  
calculations. You have value range information from program flow, and you  
have value range information from types.

You want the 'Valid optimization to ignore range information from types,
because that's exactly what it is checking in the first place.

On the other hand, you *want* to use range information from program flow.  
For example, if you just assigned a constant, you *know* the exact range  
of the thing. Or maybe you already passed a program point where any out-of- 
range value would have been caught by an implicit 'Valid. It could  
optimize away a lot of implicit 'Valid checks done, say, on the same  
variable used as an array index multiple times.

Now that is certainly nontrivial to implement, and may not be worth it for  
gcc. But I believe it would be better than the other way.

MfG Kai


Re: Running ranlib after installation - okay or not?

2005-09-01 Thread Kai Henningsen
[EMAIL PROTECTED] (Andrew Pinski)  wrote on 31.08.05 in <[EMAIL PROTECTED]>:

> On Aug 31, 2005, at 2:02 PM, Ian Lance Taylor wrote:
>
> > Gerald Pfeifer <[EMAIL PROTECTED]> writes:
> >
> >> Does anyone disagree (and if not, have suggestions how to address this
> >> in GCC)?
> >
> > ranlib is basically never required on a modern system.  It is really
> > only needed if the archive is built with the S option to ar.
> >
> > So I think the best way to address this is to not run ranlib.
>
> If you consider Darwin "modern", then that statement is not correct
> as moving/copying an archive on darwin, requires ranlib to be run.

Is there a point to this behaviour? It sounds as if someone confused an  
archive with a nethack scorefile ...

MfG Kai


Re: Running ranlib after installation - okay or not?

2005-09-02 Thread Kai Henningsen
ian@airs.com (Ian Lance Taylor)  wrote on 01.09.05 in <[EMAIL PROTECTED]>:

> [EMAIL PROTECTED] (Kai Henningsen) writes:
>
> > [EMAIL PROTECTED] (Andrew Pinski)  wrote on 31.08.05 in
> > <[EMAIL PROTECTED]>:
> >
> > > If you consider Darwin "modern", then that statement is not correct
> > > as moving/copying an archive on darwin, requires ranlib to be run.
> >
> > Is there a point to this behaviour? It sounds as if someone confused an
> > archive with a nethack scorefile ...
>
> a.out archives used to work this way too, e.g. on SunOS 4.  The idea
> was that people would often use ar without updating the symbol table.
> Thus the symbol table has a timestamp.  The linker checks that the
> timestamp of the symbol table is not older than the file modification
> time of the archive.

But then all you have to do is copy the timestamp, too. This sounded more  
like saving inode numbers and stuff ...

I am, of course, accustomed to a cp that can copy timestamps. And I see  
that my install also has a -p option ...

MfG Kai


Re: -fprofile-generate and -fprofile-use

2005-09-02 Thread Kai Henningsen
Hi Janis,
[EMAIL PROTECTED] (Janis Johnson)  wrote on 01.09.05 in <[EMAIL PROTECTED]>:

> On Thu, Sep 01, 2005 at 11:45:35PM +0200, Steven Bosscher wrote:
> > On Thursday 01 September 2005 23:19, girish vaitheeswaran wrote:
> > > Sorry I still did not follow. This is what I
> > > understood. During Feedback optimization apart from
> > > the -fprofile-generate, one needs to turn on
> > > -fmove-loop-invariants.
> >
> > You don't "need to".  It just might help iff you are using a gcc 4.1
> > based compiler.
> >
> > > However this option is not
> > > recognized by the gcc 3.4.4 or 3.4.3 compilers. What
> > > am I missing?
> >
> > You are missing that
> > 1) this whole thread does not concern gcc 3.4.x; and
> > 2) the option -fmove-loop-invariants does not exist in 3.4.x.
>
> Girish started this thread about problems he is seeing with GCC 3.4.3

The discussion, maybe. The thread, definitely not - that was started by  
Peter Steinmetz (new subject, no References:). And it was explicitly
about "using mainline":

] There was some discussion a few weeks ago about some apps running slower
] with FDO enabled.

...

] While this doesn't explain all of the degradations discussed (some were
] showing up on older versions of the compiler), it may explain some.

MfG Kai


Re: Running ranlib after installation - okay or not?

2005-09-02 Thread Kai Henningsen
ian@airs.com (Ian Lance Taylor)  wrote on 02.09.05 in <[EMAIL PROTECTED]>:

> [EMAIL PROTECTED] (Kai Henningsen) writes:
>
> > ian@airs.com (Ian Lance Taylor)  wrote on 01.09.05 in
> > <[EMAIL PROTECTED]>:
> >
> > > a.out archives used to work this way too, e.g. on SunOS 4.  The idea
> > > was that people would often use ar without updating the symbol table.
> > > Thus the symbol table has a timestamp.  The linker checks that the
> > > timestamp of the symbol table is not older than the file modification
> > > time of the archive.
> >
> > But then all you have to do is copy the timestamp, too. This sounded more
> > like saving inode numbers and stuff ...
> >
> > I am, of course, accustomed to a cp that can copy timestamps. And I see
> > that my install also has a -p option ...
>
> We're talking SunOS 4 here, which was just acting as earlier systems
> did.  Back then the options to cp were -i, -f and -r.  And install was
> a new fangled shell script that most packages didn't use.

Darwin isn't SunOS 4.

MfG Kai


Re: proposed Opengroup action for c99 command (XCU ERN 76)

2005-09-21 Thread Kai Henningsen
[EMAIL PROTECTED] (Joseph S. Myers)  wrote on 16.09.05 in <[EMAIL PROTECTED]>:

> C++ requires (A) and provides examples of valid programs where it can be
> told whether a normalisation of UCNs is part of the implementation-defined
> phase 1 transformation.  As I gave in a previous discussion,
>
> #include <string.h>
> #include <stdlib.h>
> #define h(s) #s
> #define str(s) h(s)
> int
> main()
> {
>   if (strcmp(str(str(\u00c1)), "\"\\u00c1\"")) abort ();
>   if (strcmp(str(str(\u00C1)), "\"\\u00C1\"")) abort ();
> }

Incidentally, gcc 3.3.5 passes this test.

However, I'm far from convinced that this is reasonable for a compiler  
supporting UCNs. It seems to me they really ought to be handled more like  
trigraphs, so that this would be

  if (strcmp(str(str(\u00C1)), "\"\u00C1\"")) abort ();

(note one less backslash), and the case of the C would be irrelevant - in  
fact, *anything* that makes it relevant feels like a bug to me.

I think comparing \u00C1 to \xC1 is a false friend here - \xC1 is only  
valid inside a string. Instead look at identifiers.

If str(str(ab\u00C1cd)) == "\"ab\\u00C1cd\"", then I'll certainly be
unpleasantly surprised - that is almost certainly not what I wanted.

MfG Kai


Re: Warning C vs C++

2005-09-21 Thread Kai Henningsen
[EMAIL PROTECTED] (Per Abrahamsen)  wrote on 19.09.05 in <[EMAIL PROTECTED]>:

> Robert Dewar <[EMAIL PROTECTED]> writes:
>
> > Per Abrahamsen wrote:
> >
> >> The idea was that you would be sure to get all the (boolean) warnings
> >> that are relevant for your project, and can give an explicit reason
> >> for each warning you don't want.
> >> It would be particularly useful when upgrading GCC, in order to be
> >> sure you get the benefit of any new boolean warnings added.
> >
> > Of course any generally useful new boolean warnings would be
> > included in -Wall.
>
> Yeah, but I want the specifically useful warnings as well :-)
>
> From my Makefile:
>
>   WARNING = -W -Wall -Wno-sign-compare \
> -Wconversion -Woverloaded-virtual \
> -Wsign-promo -Wundef -Wpointer-arith -Wwrite-strings
> #  -Wold-style-cast: triggered by header files for 2.95/woody
> #  -Wmissing-noreturn: triggered by some virtual functions.
> #  -Wmissing-prototypes -Wstrict-prototypes: Not C++ flags.
>
> At some point I went through the manual and added all the warning
> flags I could find, then commented out those that did not apply to my
> coding style or environment.  Apparently there were six additional
> flags that either didn't trigger any warnings on my code, or where I
> found a rewrite made the code clearer.

That is a pretty common way of doing things. Incidentally, any time I've  
done this, I wanted labels on warnings as to what option was responsible  
(which we finally seem to be getting) instead of guessing ... and more  
than once I'd have loved to keep about half of a warning flag whose other  
half was just noise: many of gcc's warning flags are too coarse-grained.

Also, there should probably be more documentation answering "so, how do I
avoid this particular warning, then?" - there are a number of cases where
it's highly nonobvious. When all you can do to silence a warning is switch
it off, that detracts from the value of said warning.

For example, the longjump endangers variable warning (I forget the exact  
wording) - how do I tell it that I don't care about this variable when  
there's a longjump, and how do I convince the code generator to not  
endanger this other variable? Without knowing that, what do I get from the  
warning?
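
(For completeness, the usual answer to the first half seems to be to
qualify the variable volatile - which the warning text doesn't tell you.
A sketch, with try_something() standing in for whatever may call longjmp:)

#include <setjmp.h>

jmp_buf env;

extern void try_something(void);   /* may call longjmp(env, 1) */

int attempt(void)
{
    /* volatile guarantees the value survives a longjmp (C99 7.13.2.1)
       and silences the "might be clobbered by 'longjmp'" warning. */
    volatile int attempts = 0;

    if (setjmp(env) != 0 && attempts >= 3)
        return -1;                 /* give up after three longjmp returns */

    attempts = attempts + 1;
    try_something();               /* may jump back to the setjmp above */
    return 0;
}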

MfG Kai


Re: Warning C vs C++

2005-09-24 Thread Kai Henningsen
[EMAIL PROTECTED] (DJ Delorie)  wrote on 21.09.05 in <[EMAIL PROTECTED]>:

> > Incidentally, any time I've done this, I wanted labels on warnings
> > as to what option was responsible
>
> -fdiagnostics-show-option

... as alluded to in the text immediately following the place you snipped.

MfG Kai


Re: Warning C vs C++

2005-09-24 Thread Kai Henningsen
[EMAIL PROTECTED] (Ben Elliston)  wrote on 21.09.05 in <[EMAIL PROTECTED]>:

> Per Abrahamsen wrote:
>
> > A -Weverything that turned on all boolean warnings would be nice.  It
> > would be useless alone, but nice followed by a lot of
> > -Wno-somesillywarning -Wno-anothersillywarning arguments.
>
> I agree.  I acknowledge that it would be useless in the general sense (you
> couldn't use it in Makefiles),

That's exactly where I *would* use it. That's exactly where I currently  
have lots and lots of individual warning flags.

Oh, not in software intended to be ported to everything from a mainframe  
to a carefully-preserved 4004, of course. But not all software is like  
that. (And even in that case, it'd still be useful for a sort of  
"developer mode" environment (something still more restricted in  
applicability than maintainer mode).)

>but it would be nice to be able to use such
> an option for time to time to audit the code in the way that you might use
> lint(1).

I strongly prefer "time to time to audit the code" to mean "every time I
compile it". When the latest change causing the new message is still fresh  
on my mind, and so is the problem it was supposed to solve.

Just like, you know, *every* non-doc patch for gcc is supposed to have  
been checked against the testsuite.

MfG Kai


Re: Warning on C++ catch by value on non primitive types

2005-10-16 Thread Kai Henningsen
[EMAIL PROTECTED] (Thomas Costa)  wrote on 13.10.05 in <[EMAIL PROTECTED]>:

> On 13 Oct 2005, at 7:41 AM, Benjamin Kosnik wrote:
>
> >
> >
> >> yeah, if it were in one of those books it could be added to the -
> >> weff-c+
> >> + option. It doesn't seem sensible to add a different option for an
> >> alternative (set of?) coding rule(s).
> >>
> >
> > FYI this is item 13 in MEC++.
> >
>
> It is on just about any decent modern C++ coding guide/list somewhere.
>
> > I think this would be a good error to have. My suggestion is to
> > file an
> > enhancement request in gcc bugzilla, with this code:
> >
> > #include 
> >
> > void
> > foo()
> > {
> >   try
> > {
> > }
> >   catch (std::logic_error e)
> > {
> > }
> > }

So what you say is that any decent modern C++ coding guide/list wants to  
forbid catching the C++ standard exceptions, and anything derived from  
them?

How on earth can that count as "decent"?!

MfG Kai


Re: Porting GCC to RDOS and C++ issues

2005-12-31 Thread Kai Henningsen
[EMAIL PROTECTED] (Leif Ekblad)  wrote on 30.12.05 in <[EMAIL PROTECTED]>:

> Mike Stump:
> > make will build libgcc for the target, specifically, you should be
> > able to cd gcc && make libgcc.a to build it.
>
> It did when I added --host=rdos to the configuration script and
> changed a couple of other files. My only current problem is that
> since RDOS uses the .exe suffix for executables, the xgcc cross-compiler
> is named xgcc.exe, even though it is a Linux executable. This problem
> is fixed by copying this file to xgcc (no extension), and re-starting
> the make process. I don't know if this is a bug in some scripts, or
> it is some misconfiguration on my part. The system should know
> that Linux executables never have an .exe extension.

Sounds like you need to study up on the --build, --host, and --target  
options.

See, for example, the autoconf manual, "Specifying the System Type"; and  
the gcc internals manual, "6.1 Configure Terms and History", and go from  
there.

> After this build is complete, there is a gcclib.a file in the rdos-host
> directory. When I install it, however, the gcclib.a file isn't copied
> to /usr/local/rdos/lib, like the other libraries from the build without
> --host=rdos, but in another directory. I fixed this by copying the
> file to the correct location.

Yes, definitely looks like that. You should use these options exactly  
once, on the top-level configure, and use the same set for everything.

MfG Kai



Re: powerpc-eabi-gcc no implicit FPU usage

2010-02-06 Thread Kai Henningsen
On Saturday, 16.01.2010, 23:14 +, Paul Brook wrote:
> > >  > Is there a way to get GCC to only use the FPU when we explicitly want
> > >  > to use it (i.e. when we use doubles/floats)?  Is -msoft-float my only
> > >  > option here?  Is there any sort of #pragma that could do the same
> > >  > thing as -msoft-float (I didn't see one)?
> > >
> > >  To absolutely prevent use of FPRs, one must use -msoft-float.  The
> > >  hard-float and soft-float ABIs are incompatible and one cannot mix
> > >  object files.
> > 
> > There is a third option -mfloat-abi=softfp which stipulates that FP
> > instructions can be used within functions but the parameter and return
> > values are passed using the same conventions as soft float. soft and
> > softfp-compiled files can be linked together, allowing you to mix code
> > using FP instructions and not with source file granularity.
> 
> That's completely the opposite. mfloat-abi=softfp tells gcc to use FPU 
> instructions while conforming to a nominally soft-float ABI.
> 
> The OP wants the opposite: Conform to a hard-float ABI without actually using 
> FPU instructions. It is (in theory) possible to do this in a half-sane way, 
> however it's also easy to end up with something very fragile that breaks more 
> often than it works.

If the whole source file does not need to use fp, could you get the
intended effect using -ffixed-REG for all FP registers?



Re: Support for export keyword to use with C++ templates ?

2010-02-06 Thread Kai Henningsen
Dodji Seketeli wrote:
> On Sat, Jan 30, 2010 at 01:47:03AM +0200, Timothy Madden wrote:
>   
>> So nobody here wants to try a big thing ? :(
>> 
>
> This question strikes me as being not very fair because many GCC people 
> are already pretty much involved. Would you fancy giving a hand?
>
>   
>> How long would it take for someone to understand how parsing works in
>> g++ ? Or to understand the build system in gcc ?
>> 
>
> It depends (amongst other things) on the motivation of said person, I 
> guess. Maybe several months? As usual in this kind of things, I guess a 
> good approach is to try to scratch your own itches without thinking too 
> much about the time it would take, especially for something as big as 
> this particular feature you are talking about :-)
>
> Dodji
>   

I'd like to remind people that the EDG people expect this to need
something between two and three man-years - presumably as a full-time
job. That's four to six *years* if you spend half your working time
on it.

Something to think about.


Re: gcc 4.4.1/linux 64bit: code crashes with -O3, works with -O2

2010-03-07 Thread Kai Henningsen

On 22.02.2010 22:41, Janis Johnson wrote:

> On Mon, 2010-02-22 at 13:11 -0800, Andrew Pinski wrote:
> > On Mon, Feb 22, 2010 at 1:06 PM, Janis Johnson  wrote:
> > > If you can reproduce the problem with a small, self-contained test then
> > > please file a bug report.  It might be possible to issue a warning or
> > > to detect that the loop should not be vectorized.  If not, maybe the
> > > compiler should disable vectorization for -fno-strict-aliasing.
> >
> > It is not an aliasing issue but an alignment issue.  Anyways this is
> > most likely the same as
> > http://gcc.gnu.org/bugzilla/show_bug.cgi?id=43009 .
>
> Yes, that's the problem I had in mind but I was thinking about an
> explicit cast to a pointer to more-aligned data in the function that has
> the vector loop.  There's no way to warn about the undefined behavior
> when the cast is in a different source file.
>
> It's interesting that two reports of failure due to this same undefined
> behavior come so close together.


I wonder if it might be feasible to have a flag which inserts code to 
check for alignment assumptions and complain if they're not met, to catch 
things like this.


(And then possibly to insert similar code for other assumptions gcc 
makes that are fairly easy to check at runtime.)


Not alternate code paths, just something like an assert that the code 
behaves like gcc feels justified in assuming it should, and the 
programmer didn't do anything stupid - as long as it's fairly simple to 
check, such as the actual alignment of a pointer or perhaps the actual 
limit of a loop or some such; and as long as the assumption actually 
influences code generation decisions.
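
A hand-written version of the kind of check I mean (CHECK_ALIGNED is a 
made-up name, not an existing gcc facility):

#include <assert.h>
#include <stdint.h>

/* Verify at run time that p really has the alignment the optimizer
   assumed before the vectorized code dereferences it. */
#define CHECK_ALIGNED(p, align) \
    assert(((uintptr_t)(const void *)(p) % (align)) == 0)

void consume(float *data, int n)
{
    int i;

    CHECK_ALIGNED(data, 16);       /* e.g. what the vectorizer assumed */
    for (i = 0; i < n; i++)
        data[i] *= 2.0f;
}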


Re: Echte Lokaliserung der Programmbausprache/ Real Localisation of Programming Language

2008-10-06 Thread Kai Henningsen
On Mon, 6 Oct 2008 18:42:17 +0100
"Dave Korn" <[EMAIL PROTECTED]> wrote:

> Rüdiger Müller wrote on 06 October 2008 17:55:
> 
>   God no.  Think of the maintenance nightmare.
> 
>   You're not the first person to come up with this idea, and you
> probably won't be the last, but it's a misbegotten idea, and there's

In fact, I believe it came up around the time when COBOL was invented.
And you'll notice that it didn't get implemented back then, even though
people thought it wouldn't be all that hard to do.

> a very good reason why it hasn't been done before, and that's not

Actually, that's not true.

In my Apple ][+ days, I've seen it done with BASIC. For some reason, it
never amounted to more than a toy.

I'm sure someone somewhere is doing it to a programming language right
now. Poor language.


Re: build system: gcc_cv_libc_provides_ssp and NATIVE_SYSTEM_HEADER_DIR

2008-10-12 Thread Kai Henningsen
On Fri, 10 Oct 2008 11:24:22 + (UTC)
"Joseph S. Myers" <[EMAIL PROTECTED]> wrote:

> On Fri, 10 Oct 2008, Thorsten Glaser wrote:
> 
> > Thomas Schwinge dixit:
> > 
> > >Ideally, IMO, this test (for stack-smashing-protection support in
> > >glibc) should not be done by grepping through SYSROOT's
> > >features.h, but instead by using the CPP for doing that.
> > 
> > Why not just autoconf?
> > 
> > Check for the presence of __stack_smash_handler() in libc… or am I
> > missing something important here?
> 
> It's desirable to be able to configure GCC correctly in the presence
> of installed headers and only a dummy libc.so, so as to get a GCC
> that can be used to build the full glibc.  See e.g. the documented
> bootstrap procedure at 
> .
>   
> As such, testing for features of the libc binary in order to build
> the core compiler is a bad idea.  The more configuration dependencies
> you put between the compiler and the library, the more complicated
> the bootstrap procedure becomes.

So ... we have a list of possible paths for various target variants.
Why not simply look into all of them? The list isn't particularly long,
and there's no reason to assume more than one will be available ... or
is there?


Re: [lto][RFC] Do not emit hybrid object files

2008-10-18 Thread Kai Henningsen
On Fri, 17 Oct 2008 14:01:35 -0600,
Jeff Law <[EMAIL PROTECTED]> wrote:

> Diego Novillo wrote:
> > On Fri, Oct 17, 2008 at 15:40, Ollie Wild <[EMAIL PROTECTED]> wrote:
> >   
> >> On Fri, Oct 17, 2008 at 12:32 PM, Diego Novillo
> >> <[EMAIL PROTECTED]> wrote: 
> >>> lto1 (even if -flto is not provided) and eventually we'll need to
> >>> support archives in the reader.
> >>>   
> >> Will we?  I thought one of the main justifications for
> >> implementing a plugin architecture in the linker was to avoid
> >> having to do this in collect2.
> >> 
> >
> > Well, it will likely be needed to support GNU ld.  I'm assuming that
> > not everyone will use gold.  Likewise, support for non-ELF
> > architectures may need to be added at some point.
> >   
> I'm not really involved in the LTO stuff at all, but my
> recommendation would be to severely de-emphasize any non-ELF targets
> -- to the point where I'd say LTO is only supported on ELF targets.
> 
> Reality is there aren't too many non-ELF targets that matter anymore 
> and, IMHO, it's reasonable to demand ELF support for LTO.  The only 
> other format that has a reasonable chance of working would be the
> COFF variants anyway and the only COFF variant that is meaningful is
> used by AIX (and perhaps Darwin?!?).

s/COFF/PECOFF/ s/AIX/Windows/

presumably matters quite a bit more than AIX does, even if we don't
like the corporation behind it.


Re: GCC 4.4.0 Status Report (2008-11-27)

2008-12-23 Thread Kai Henningsen
On Thu, 27 Nov 2008 18:30:44 + (UTC),
"Joseph S. Myers" wrote:

> There are a total of 5150 open bugs in Bugzilla, counting both
> regressions and non-regressions.  It seems quite likely that many of
> the older bugs have in fact been fixed since they were filed, but we
> don't have any good procedures for occasionally reviewing bugs to see
> if they are still applicable to current trunk.

Assuming some random volunteer wants to take a stab at (part of) this,
what would (s)he need to do so this is actually useful, and especially
how would (s)he best communicate their results?

Hmm ... if a useful job description can be made, this could perhaps be
added to the newbie project list?


Re: Plugin API Comments (was Re: GCC Plug-in Framework ready to port)

2009-02-03 Thread Kai Henningsen
On Sun, Feb 1, 2009 at 04:06, Sean Callanan  wrote:

> We also have a magic argument called FILE that lets you load arguments from
> a file.

That's what @ arguments are for. Which argues for not concatenating arguments.

Would it be a problem to do

-plugin=myplugin -plugin-myplugin-arg1=stuff -plugin-myplugin-someflag ...

(finding /some/path/myplugin.so or C:\some\path\myplugin.dll in the
plugin search path)?


Re: messaging

2009-04-13 Thread Kai Henningsen

Arthur Schwarz wrote:

In the following code fragment:

# include 
# include 
# include 

using namespace std;
void CommandLine(int argc, char** argv);
int main(int argc, char** argv) {
   CommandLine(argc, argv[]);
   ifstream x.open(argv[1], ios:in);
   ofstream y.open(argv[1], ios::in);
   
   return 0;

};

g++-4 messaging is:

g++-4 x.cpp

x.cpp: In function 'int main(int, char**)':
x.cpp:8: error: expected primary-expression before ']' token
x.cpp:10: error: expected primary-expression before ':' token

A recommendation and reason for change is:
1: x.cpp:8 error: illegal to pass an array without subscript value as an 
   argument

   The given message is accurate but non-expressive of the reason
   for failure.


Actually, in this case I'd say that the original message is perfectly 
fine, and your suggestion is rather confusing. However, what one could 
say here is something like "[] is only allowed in declarations".
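
To make the contrast concrete, here's a minimal sketch of what such a 
message would be pointing at (reusing the CommandLine name from the 
example above; this is purely an illustration, not a proposed test case):

/* Empty [] is fine in a declarator ... */
void CommandLine(int argc, char *argv[]);

void caller(int argc, char **argv)
{
    /* ... but in an expression you pass the array/pointer itself. */
    CommandLine(argc, argv);
    /* CommandLine(argc, argv[]);   <- the construct g++ complains about */
}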




3: cpp:10 error: illegal scope resolution operator ':'
   From memory, there are three uses of ':' in C++
   ':'   label terminator, :
   ':'   case in a switch statement, case :
   ':'   scope resolution operator, "::"
   The given diagnostic message is deceptive. 


Could perhaps say "':' is not a scope resolution operator", unless 
someone comes up with a use case where it is ...


Re: messaging

2009-04-14 Thread Kai Henningsen

aschwarz1...@verizon.net wrote:

Thanks Kai. I do have what I hope is a more specific subjective reason for 
saying that I think the existing diagnostics should be changed. Fundamentally, 
what is provided in the messaging is not an indication of what is wrong, but an 
indication of what is required to repair the fault. My objections then become:
1: As an old man full of wisdom, most developers can't distinguish a
   'primary-expression' from a washing machine. Further, to determine


Well, here I think that such people should perhaps put down the keyboard 
and slowly back away from the compiler. That's about the same as 
driving a car without knowing what a stop sign means.


At least if they're unable to infer that a primary expression is a kind 
of expression, and there's no expression between [ and ].



   what correction might be needed most users would have to research
   the C++ standard (or other standardized document supported by the
   g++ development community) to find out exactly what constitutes a
   'primary-expression'. 


Let me put it like this: in this particular case, either it isn't 
particularly hard for the programmer to realise that he left out the 
index expression he wanted to write, or, if that wasn't the mistake, it 
demonstrates a rather fundamental misunderstanding of the language and 
he desperately needs to consult *something* to learn what he's been 
missing - no possible compiler message could close this hole in education.



2: It puts an obligation on the g++ development community to ensure
   that the messaging is consistent with documentation and that if the 
   term 'primary-expression' changes then g++ will change the messaging
   to conform to the new term. 


It's directly from the language standard. If the standard changes 
(presumably for more reason than not liking the term), that place in the 
compiler needs changing anyway.


And really, I *like* it when the compiler uses terms directly from the 
language standard, instead of inventing some other terms. I can search 
for those terms in the standard, and most other people talking about the 
standard will use the same terms.



3: The cause of the error is more specific than its solution. The cause
   of the fault is the user (in this case me) provided something that
   was wrong. It wasn't the lack of a 'primary-expression' but the
   existence of the illegal constructs. My conjecture is that if the
   message says "you did this wrong" then the user would have an easy
   time of finding a fix.


It could well have been a typo for all the compiler knows, where you 
inadvertently left out the index.



I don't argue with the details of my wording. My intent is not to show that I am a better 
wordsmith but that the existing diagnostic messages are not specific enough. From Item 1: 
above, in order for the average user to fix the error the user must research the terms 
used, then compare the syntax given with the actual fault, and then fix the error. If the 
message say "this is the fault", the research goes the way of the 
woolly-mammoth.

The paradigm is that the message should provide the minimum amount of 
information required to identify the syntax/semantics which caused the failure.


And in this case, I believe that the original message does just that, 
whereas your proposal doesn't.




Re: 4.3 weekly snapshots bot broken?

2009-06-17 Thread Kai Henningsen

Joseph S. Myers wrote:
If you are interested in following the fine points of breakage of 
individual snapshots or other individual jobs run from cron, you should 
follow the gccadmin and overseers lists, where you would have seen the 
message showing the breakage and the subsequent discussion of the fix.


Is it possible to subscribe to those, or can the public only follow via 
the web archives?


Re: RTL alias analysis

2006-01-21 Thread Kai Henningsen
ian@airs.com (Ian Lance Taylor)  wrote on 20.01.06 in <[EMAIL PROTECTED]>:

> When dealing with unions, you can take pointers to different fields in
> the unions.  If the fields have different types, these pointers can
> have non-conflicting alias sets.  Therefore within a single union the
> same memory can be read or written by different pointers.  This is
> considered to be invalid--a valid program is required to always access
> the memory within the union in the same type, except if you access the
> memory via the union type itself (this permission being a gcc
> extension).

#include <stdio.h>

void test(void)
{
union { int i; double d; } u;
int *ip;
double *dp;
int ii;
double dd;

/* write and read u through the int member first ... */
ip = &u.i;
*ip = 15;
ii = *ip;
/* ... then write and read it through the double member */
dp = &u.d;
*dp = 1.5;
dd = *dp;
printf("ii=%d dd=%f\n", ii, dd);
}

So you're saying this function is not valid?

MfG Kai


Re: RTL alias analysis

2006-01-22 Thread Kai Henningsen
ian@airs.com (Ian Lance Taylor)  wrote on 21.01.06 in <[EMAIL PROTECTED]>:

> "Dave Korn" <[EMAIL PROTECTED]> writes:
>
> >   I think he's saying that _this_ one might generate invalid code:
> >
> > void test(void)
> > {
> > union { int i; double d; } u;
> > int *ip;
> > double *dp;
> > int ii;
> > double dd;
> >
> > dp = &u.d;
> > ip = &u.i;
> > *ip = 15;
> > ii = *ip;
> > *dp = 1.5;
> > dd = *dp;
> > printf("ii=%d dd=%f\n", ii, dd);
> > }
>
> That function is valid too.
>
> Here is an example of an invalid function:
>
> void test(void)
> {
> union { int i; double d; } u;
> int *ip;
> double *dp;
> int ii;
> double dd;
>
> dp = &u.d;
> ip = &u.i;
> *ip = 15;
> *dp = 1.5;
> ii = *ip;
> dd = *dp;
> printf("ii=%d dd=%f\n", ii, dd);
> }

And of course(?), stack slot sharing is supposed to be like the first two  
examples, not the last.

Hmm. I think I begin to see what this is about. RTL AA (if I got this  
right) gets confused when two vars at the same address *can't* be  
distinguished by type, so that would be

#include <stdio.h>

void test(void)
{
union { int i1; int i2; } u;
int *ip1;
int *ip2;
int ii1;
int ii2;
/* both members have the same type, so type-based alias analysis
   cannot tell the two accesses apart */
ip1 = &u.i1;
ip2 = &u.i2;
*ip1 = 15;
ii1 = *ip1;
*ip2 = 17;
ii2 = *ip2;
printf("ii1=%d ii2=%d\n", ii1, ii2);
}

Hmm. I don't know if ISO allows *that* ... though I can see how it might  
arise naturally.

MfG Kai


Re: Eliminate CHAR_TYPE from config/sh/sh.h

2006-02-10 Thread Kai Henningsen
[EMAIL PROTECTED] (Roger Sayle)  wrote on 09.02.06 in <[EMAIL PROTECTED]>:

> On Thu, 9 Feb 2006, Kaz Kojima wrote:
> > Here is a patch to remove CHAR_TYPE from config/sh/sh.h.
>
> My apologies to the SH folks.  I did check the entire tree with
> find . -type f -exec grep CHAR_TYPE {} \; -print, but unfortunately
> the high incidence of false positives for CHAR_TYPE_SIZE and
> *WCHAR_TYPE, etc.. meant I had to audit each hit by hand/eye.
> Hopefully, this is the only instance I missed.

Well, at least the GNU version of grep has -w, and I'd be surprised if  
it's unique in that.


MfG Kai


Re: "Experimental" features in releases

2006-04-19 Thread Kai Henningsen
[EMAIL PROTECTED] (Daniel Berlin)  wrote on 18.04.06 in <[EMAIL PROTECTED]>:

> This is in fact, not terribly surprising, since the algorithm used was the
> result of Sebastian and I sitting at my whiteboard for 30 minutes trying to
> figure out what we'd need to do to make swim happy :).

> This would leave -ftree-loop-linear in 4.2, but make it not useful for
> increasing SPEC scores.

So is this an object lesson for why optimizing for benchmarks is a bad  
idea?

MfG Kai


Re: Summer of Code project discussion

2006-05-04 Thread Kai Henningsen
[EMAIL PROTECTED] (Mark Mitchell)  wrote on 03.05.06 in <[EMAIL PROTECTED]>:

> To make this work, we have to be careful not to generate as much garbage
> as we presently do, as we'll needlessly waste space in these pools.
> Right now, we're using GC partly to compensate for things like using
> trees to represent strictly temporary data.

You can always include the option for temporary pools, which you throw  
away after the temporary trees are no longer used - if you can make sure  
you know that.

You probably need to give pool arguments to a number of functions, anyway.
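
For concreteness, here is a minimal sketch of the throw-away-pool 
pattern I mean, written against the plain obstack interface (as in 
glibc/libiberty); the node type and function name are made up for 
illustration:

#include <stdlib.h>
#include <obstack.h>

/* obstack needs to be told how to get and release its chunks */
#define obstack_chunk_alloc malloc
#define obstack_chunk_free free

/* hypothetical short-lived tree node, just for illustration */
struct tmp_tree { int code; struct tmp_tree *op[2]; };

void build_and_discard_temporaries (void)
{
  struct obstack tmp_pool;
  struct tmp_tree *t;

  obstack_init (&tmp_pool);

  /* allocate as many short-lived nodes as we like from the pool ... */
  t = (struct tmp_tree *) obstack_alloc (&tmp_pool, sizeof *t);
  t->code = 0;
  t->op[0] = t->op[1] = NULL;

  /* ... and throw the whole pool away in one go once we know the
     temporaries are dead, instead of having the GC trace them */
  obstack_free (&tmp_pool, NULL);
}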

MfG Kai


Re: SVN: Checksum mismatch problem

2006-05-24 Thread Kai Henningsen
[EMAIL PROTECTED] (Russ Allbery)  wrote on 22.05.06 in <[EMAIL PROTECTED]>:

> Bruce Korb <[EMAIL PROTECTED]> writes:
>
> > I do that also, but I am also careful to prune repository
> > directories (CVS, .svn or SCCS even).  I rather doubt it is my RAM,
> > BTW.  Perhaps a disk sector, but I'll never know now.  (Were it RAM,
> > the failure would be random and not just the one file.)  The original
> > data were rm-ed and replaced with a new pull of the Ada code.
>
> Yup, I've seen change of capitalization of a single letter in files due to
> bad disk sectors before, even on relatively modern hardware.  It's a
> single bit error, so it's an explainable failure mode.

And there's enough involved so that a diagnosis is almost impossible until  
you get a lot more errors.

Memory. Disk. Controller. Cpu ...

... or it could just be a lone alpha particle hitting the bit in one of  
those places.

Could also be a software error. A stray pointer in the kernel, being used  
to set a flag, and happening to point into a buffer for that disk block.

Until it gets reproducible - or reproduces itself ... no way to tell.

MfG Kai


Re: Details for svn test repository

2005-02-13 Thread Kai Henningsen
[EMAIL PROTECTED] (Paul Schlie)  wrote on 11.02.05 in <[EMAIL PROTECTED]>:

> - I apparently misinterpreted:
>
>   http://svn.collab.net/viewcvs/svn/trunk/
>
>   as was viewing it via viewcvs (which I now understand is svn friendly)

In general, these days, /viewcvs/cvs/... will access a CVS repository, and  
/viewcvs/svn/... will access an SVN repository.

MfG Kai


Re: LC_COLLATE (was Re: SVN Test Repo updated)

2005-02-17 Thread Kai Henningsen
[EMAIL PROTECTED] (Paolo Bonzini)  wrote on 17.02.05 in <[EMAIL PROTECTED]>:

> > The sort alghorithm has nothing to do with ls, but with your selection of
> > LC_COLLATE.  But then, BSD (at least the variant used in MacOSX) is way
> > behind current l10n standards.
>
> At least they do not break s/[A-Z]// which on "well-internationalized"
> OSes is case-insensitive with most locales other than C.
>
> I still haven't dug enough to understand if the responsible for this is
> the POSIX specification for localization, the ANSI specification for
> strcoll, or somebody in the glibc team.  But I know that it was the
> most-reported sed "bug" before I explicitly flagged it as a non-bug in
> the manual.
>
> I can only guess the outcry if Perl started obeying LC_COLLATE.

What do you mean, "started"? It's been doing that for years now.

MfG Kai


Re: RFC: objc_msgSend efficiency patch

2005-02-22 Thread Kai Henningsen
[EMAIL PROTECTED] (Dale Johannesen)  wrote on 21.02.05 in <[EMAIL PROTECTED]>:

> Simple Objective C programs such as
>
> #include 
> void foo(void) {
>Object *o;
>[o++ free];
> }
>
> result in calling objc_msgSend indirectly through a pointer, instead
> of directly as they did in 3.3.  This seems to happen only at low
> optimization levels; still, it's a performance regression.  The reason
> is that the gimplifier puts the result of the OBJ_TYPE_REF into a temp
> due to the postincrement.

I thought current tree optimizers should be able to figure this out?

MfG Kai


Re: Memory leaks in compiler

2008-01-17 Thread Kai Henningsen
On Thu, Jan 17, 2008 at 02:46:12PM -, Dave Korn wrote:
> On 16 January 2008 22:09, Diego Novillo wrote:
> 
> > On 1/16/08 4:16 PM, Andrew Haley wrote:
> > 
> >> Because it's not a bug?  You're changing the code to silence a false
> >> negative, which this is what we here in England call "putting the cart
> >> before the horse."  If we clean up all the memory regions on closedown
> >> we'll be wasting CPU time.  And for what?
> > 
> > I agree.  Freeing memory right before we exit is a waste of time.
> 
>   So, no gcc without an MMU and virtual memory platform ever again?  Shame, it
> used to run on Amigas.

You mean the Amiga didn't automatically free all process memory on
termination, the way MS-DOS did (without an MMU and virtual memory
platform)? (Unless, of course, you expressly asked for that not to
happen, by calling "terminate and stay resident".)


Re: GCC 4.3 target deprecation proposals

2008-01-29 Thread Kai Henningsen
On Wed, Jan 23, 2008 at 07:28:23PM -0500, DJ Delorie wrote:
> 
> > You can't cross-test, with DejaGnu running elsewhere?
> 
> I've tried.  The problem is communication between the DOS system (or
> emulator) and the host system.  DOS isn't kind to networking,
> semaphores, or anything else that hints at multiprocessing.

Under Linux with DOSEMU, it should be fairly simple.

Create a named pipe.

On the DOS side, use a shell script to read one line from the pipe.

On the Linux side, prepare a DOS shell script or batch for the DOS side
to execute. (Remember the carriage returns where appropriate.) Then,
echo a crlf into the named pipe.

The read on the DOS side will now finish. Go execute the job. Then echo
your return code into a file, read another line from the named pipe, and
loop.

On the Linux side, echo another crlf into the named pipe; this waits for
the DOS side. Then read the return code from the file.


You might (with a sufficiently modern bash on the Linux side) write your
script with something like
#! /bin/bash
# Record the command to run (this wrapper's own name plus arguments)
# where the DOS side can find it.
printf "%q " "$0" "$@" > script
# Wake up the DOS side, which is blocked reading the named pipe.
echo -e '\r' > thepipe
echo Waiting for return code
# This write only completes once the DOS side reads the pipe again,
# i.e. after it has run the job and written its return code.
echo -e '\r' > thepipe
read rc < rc
exit $rc
and link this to every command name you want to be able to execute.

On the DOS side,
:
while true
do
echo Waiting for command
read dummy < thepipe
sh script
echo $? > rc
read dummy < thepipe
done

Disclaimer: I haven't really done more than a quick test of named pipe
reading. I'm sure there are some subtleties here.


Re: Defining a common plugin machinery

2008-09-19 Thread Kai Henningsen
On Fri, Sep 19, 2008 at 15:30, Brian Dessent <[EMAIL PROTECTED]> wrote:
> Ralf Wildenhues wrote:

> Is it really that far fetched to have the plugin not directly access
> anything from the executable's symbol table but instead be passed a
> structure that contains a defined set of interfaces and callbacks?  By
> using this strategy you also insulate the plugin from the gcc internals,

Too much so. This goes back again to needing to decide in advance what
will be interesting; that's pretty much a deal-breaker, I think.

No, it sounds to me as if linking at runtime is the way to go for PE.
However, that's not as hard as it sounds.

Essentially, *for each plugin*, you compile a list of all the gcc
symbols that *this plugin* needs. You convert this list into a
structure with a long list of pointers and names, which you hand to a
small routine that resolves all of them (via dlsym(),
GetProcAddress(), or whatever); and the plugin uses the stuff via the
pointers in the structure. As the list of symbols needed for that
plugin is both known and relatively stable, the burden from that isn't
too large.

In gcc, you just make all symbols resolvable.

Gcc then loads the plugin, gets a pointer to that structure (via a
well-known symbol name), resolves all symbols therein, and calls some
initialization function so the plugin can register in all relevant
hooks - or one could do even that by putting additional data into that
structure.

And one can also put versioning data therein, so that gcc can check
compatibility before calling the first plugin routine.

Just a quick&dirty mock-up:

struct s_gcc_plugin_info gcc_plugin_info = {
  .version = "4.4.0",
  .hook = {
    { .name = "hook_parse_params", .proc = &my_parse_params },
    [...]
  },
  .link = {
    { .name = "fold", .ptr = 0 },
    [...]
  },
};

and possibly, for convenience,

#define gcc_fold ((prototype_for_fold)(gcc_plugin_info.link[0].ptr))

... gcc_fold(...) ...

All those .name fields are presumably const char *.
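
For illustration, a minimal sketch of what the loading/resolving side 
could look like under this scheme; the struct layout, the well-known 
"gcc_plugin_info" symbol and load_plugin() are just the hypothetical 
names from the mock-up above, and dlsym() stands in for whatever 
GetProcAddress()-style lookup a PE host would use:

#include <dlfcn.h>
#include <stdio.h>
#include <string.h>

struct s_gcc_plugin_link { const char *name; void *ptr; };

struct s_gcc_plugin_info {
  const char *version;
  /* hook table omitted for brevity */
  struct s_gcc_plugin_link link[];   /* terminated by a NULL name */
};

static int load_plugin (const char *path)
{
  /* gcc itself must be linked with -rdynamic (ELF) so that its own
     symbols are visible to dlsym; on PE this would be an export
     table plus GetProcAddress instead. */
  void *self = dlopen (NULL, RTLD_LAZY);
  void *plugin = dlopen (path, RTLD_LAZY);
  struct s_gcc_plugin_info *info;
  int i;

  if (!plugin)
    {
      fprintf (stderr, "cannot load %s: %s\n", path, dlerror ());
      return -1;
    }

  /* the well-known symbol every plugin is expected to export */
  info = (struct s_gcc_plugin_info *) dlsym (plugin, "gcc_plugin_info");
  if (!info || strcmp (info->version, "4.4.0") != 0)
    {
      fprintf (stderr, "%s: missing or incompatible plugin info\n", path);
      return -1;
    }

  /* fill in the pointers this particular plugin asked for */
  for (i = 0; info->link[i].name != NULL; i++)
    {
      info->link[i].ptr = dlsym (self, info->link[i].name);
      if (!info->link[i].ptr)
        {
          fprintf (stderr, "%s wants unknown symbol %s\n",
                   path, info->link[i].name);
          return -1;
        }
    }

  /* at this point one would call the plugin's initialization hook */
  return 0;
}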

And one could presumably write a small utility that, given a list of
names, goes through gcc's headers and constructs that struct and the
convenience defines automatically. If (as is the case here) the
declarations follow a fairly consistent style, this doesn't need a
full C parser.

It seems to me that the overhead, both in runtime and in maintenance,
isn't too bad here.


Re: no symbol in current context problem when debug the program in gdb

2008-09-20 Thread Kai Henningsen
Please don't crosspost between gcc and gcc-help. Thanks.

On Sat, Sep 20, 2008 at 02:48, Peng Yu <[EMAIL PROTECTED]> wrote:
> On Mon, Sep 15, 2008 at 2:54 PM, Peng Yu <[EMAIL PROTECTED]> wrote:
>>
>> Hi,
>>
>> I have the following program. When I step in to test's constructor, I
>> would be able to print the variable three. It says
>> (gdb) n
>> 7 T three = 3;
>> (gdb) n
>> 8 std::cout << three << std::endl;
>> (gdb) p three
>> No symbol "three" in current context.
>>
>> According to gdb mailing list, this is a bug in GCC. I'm wondering if
>> this issue has been resolved in the later versions of GCC.

Isn't this a case of the stuff that the
var-tracking-assignments-branch
(http://gcc.gnu.org/wiki/Var_Tracking_Assignments) tries to fix?


Can CODE_FOR_$(div$V$I$a3$) ever match?

2007-11-01 Thread Kai Henningsen
This is genopinit.c:92 (sdivv_optab) (in revision 127595).

I read this as "the next mode must be a full integer mode; add a v if it
is a float mode". Which is doubly strange as this is the only place
where $V is used.

Am I missing something here, or is this a bug?


Re: [wwwdocs] PATCH Re: Optimization of conditional access to globals: thread-unsafe?

2007-11-04 Thread Kai Henningsen
On Sun, Nov 04, 2007 at 02:04:21PM +0100, Gerald Pfeifer wrote:
> On Sun, 28 Oct 2007, Andreas Schwab wrote:
> >> I don't have access to the POSIX standard itself
> > See .
> 
> Now added to our "Links and Selected Readings" page; thanks for the
> pointer, Andreas!

While you're at it, you might want to add a link to the Austin Group
which is developing that standard. Access to the documents needs only
membership in their mailing list.

It's at http://www.opengroup.org/austin/.



Re: Designs for better debug info in GCC

2007-12-17 Thread Kai Henningsen
On Tue, Dec 18, 2007 at 02:38:31AM -0200, Alexandre Oliva wrote:

> Would reformatting these and stamping a title on top make it worthy of
> your interest?

Actually, I think that *would* help (though, of course, it's impossible
to predict if it would help *enough*).

I've noticed before (though this thread is a particularly extreme
example) that GCC developers seem no more immune than other people, from
being able to ignore what's in a mail message (or news article) they're
replying to, even up to ignoring the carefully-selected part they're
quoting.

I don't claim to understand it (nor to be completely immune to it
myself), but I'm no longer surprised by it. Disappointed, but not
surprised.

Anyway, the point is that this seems much rarer when the subject is
*not* in the inbox or a newsgroup. For whatever reason, people apply
their reading skills differently in different situations.

So, my advice would be:

1. Wait a while, so people have time to calm down.

2. Reformat and reorganize the stuff.

3. Put it in an obviously different format - say, give a link to a PDF,
instead of putting it in a mail to this list.

Oh, and it probably wouldn't hurt to give a short summary of what you
did to the various optimizers, including mentioning "no change", *after*
you know that that actually works. (For a work in progress, people seem
to often disbelieve such claims, however well justified ... at least, if
they're already looking hard for arguments against it, however
spurious.)

And no, I have no idea why this particular discussion degenerated so
badly, and similar others didn't. Your style of argumentation may not
have been perfect, but the same can be said for many other people here,
and it doesn't always seem to lead to a meltdown. Maybe it depends on
unpredictable factors like the mood people are in when they go reading
their mail.