Enforcing order of execution for function arguments

2007-01-10 Thread Chris Jefferson

Apologies for the slightly off-topic message.

One thing which comes up regularly on various C and C++ message boards
is that expressions like "f() + g()" and "a(f(), g())" do not specify
the order in which f() and g() will be executed.

How hard would it be to fix the order of execution in gcc/g++? Could
someone point me to the piece of code which must change, or if it is
only a very small change, the actual change required? I would very
much like to be able to benchmark this, as I can find no previous case
where someone has tried fixing the order of execution to see if it
actually makes any measurable difference.
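
For concreteness, a minimal example of the kind of code in question (the
function names are invented for illustration; the printed order is
unspecified and may differ between compilers and optimisation levels):

#include <cstdio>

int f() { std::printf("f "); return 1; }
int g() { std::printf("g "); return 2; }
int a(int x, int y) { return x + y; }

int main()
{
    int sum  = f() + g();    // may print "f g " or "g f "
    int call = a(f(), g());  // likewise: argument evaluation order is unspecified
    std::printf("\n%d %d\n", sum, call);
    return 0;
}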

Would anyone be interested in this being added as a command line argument?

Thank you,

Chris


Re: __builtin_cpow((0,0),(0,0))

2005-03-07 Thread Chris Jefferson
Duncan Sands wrote:
| On Mon, 2005-03-07 at 10:51 -0500, Robert Dewar wrote:
|
|>Paolo Carlini wrote:
|>
|>>Andrew Haley wrote:
|>>
|>>
|>>>F9.4.4 requires pow (x, 0) to return 1 for any x, even NaN.
|>>>
|>>>
|>>
|>>Indeed. My point, basically, is that consistency appear to require the
|>>very same behavior for *complex* zero^zero.
|>
|>I am not sure, it looks like the standard is deliberately vague here,
|>and is not requiring this result.
|
|
| Mathematically speaking zero^zero is undefined, so it should be NaN.
| This already clear for real numbers: consider x^0 where x decreases
| to zero.  This is always 1, so you could deduce that 0^0 should be 1.
| However, consider 0^x where x decreases to zero.  This is always 0, so
| you could deduce that 0^0 should be 0.  In fact the limit of x^y
| where x and y decrease to 0 does not exist, even if you exclude the
| degenerate cases where x=0 or y=0.  This is why there is no reasonable
| mathematical value for 0^0.
|
That is true.
However, what the standard says looks to me to require 0^0 = 1. Also,
printf("%f", pow(0.0, 0.0)) prints 1.0 on both VC++ 6 and g++ 3.3 (just
what I happen to have lying around).
I would agree with Paolo that the most important point is arguably
consistency, and it looks like that is pow(0.0, 0.0) = 1.
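
For reference, a minimal check along these lines (only a sketch; it
assumes an implementation whose pow() follows C99 Annex F, where
pow(x, 0) returns 1 even for x = NaN):

#include <cstdio>
#include <cmath>
#include <limits>

int main()
{
    double nan = std::numeric_limits<double>::quiet_NaN();
    std::printf("pow(0.0, 0.0) = %f\n", std::pow(0.0, 0.0));
    std::printf("pow(NaN, 0.0) = %f\n", std::pow(nan, 0.0));
    return 0;
}
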
Chris


Re:[OT] __builtin_cpow((0,0),(0,0))

2005-03-08 Thread Chris Jefferson
Ronny Peine wrote:
Well, i'm studying mathematics and as i know so far 0^0 is always 1 
(for real and complex numbers) and well defined even in numerical and 
theoretical mathematics. Could you point me to some publications which 
say other things?

cu, Ronny
Just wanting to put in my mathematical opinion as well (sorry): I'm
personally of the opinion that you can define 0^0 to be whatever you
like. Define it to be 0, 1 or 27. Also feel free to define 1^1 to be
whatever you like as well; make it 400 if you like.

Maths is much less written in stone than a lot of people think. However, 
the main argument here is which definition of 0^0 would be most useful.

One of the most important things, I think, is that I usually consider
floating point arithmetic to be closely linked to range arithmetic. For
this reason it is very important that the various functions involved
are continuous, as you hope that a small perturbation of the input
values will lead to a small perturbation of the output values, else
errors will grow too quickly.

Any definition of 0^0 will break this condition, as there are paths
along which you can approach it and be equal to 0, and paths along
which you can approach it and be equal to 1. Therefore it is probably
best to leave it undefined.
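
A small numerical sketch of those two approach paths (a sketch only; it
assumes an ordinary C math library, where pow(t, 0) is 1 and pow(0, t)
is 0 for positive t):

#include <cstdio>
#include <cmath>

int main()
{
    // Along y = 0: x^0 stays at 1 however small x gets.
    // Along x = 0: 0^y stays at 0 however small y gets.
    for (double t = 0.1; t >= 1e-5; t /= 10.0)
        std::printf("pow(%g, 0) = %g   pow(0, %g) = %g\n",
                    t, std::pow(t, 0.0), t, std::pow(0.0, t));
    return 0;
}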

What we are debating here isn't really maths at all, just which
definition of 0^0 will be most useful and least surprising (and perhaps
also what various standards tell us to use).

Chris


Re: question on semantics

2005-05-04 Thread chris jefferson
Chris Friesen wrote:
I'm not sure who I should address this to...I hope this is correct.
If I share memory between two processes, and protect access to the 
memory using standard locking (fcntl(), for instance), do I need to 
specify that the memory is volatile?  Or is the fact that I'm using 
fcntl() enough to force the compiler to not optimise away memory 
accesses?

As an example, consider the following code, with the lock/unlock
placeholders replaced by the appropriate fcntl() code:

int *b;
int test()
{
    b = /* address of the shared memory */;
    while (1) {
        /* acquire lock with fcntl() */
        if (*b) {
            break;
        }
        /* release lock with fcntl() */
    }
    return *b;
}
Without the locks, the compiler is free to only load *b once (and in 
fact gcc does so).  Is the addition of the locks sufficient to force 
*b to be re-read each time, or do I need to declare it as

volatile int *b;
Officially you have to declare "volatile int *b". Although I can't be
sure, looking at this sample of code gcc will probably re-read the value
anyway: if the fcntl calls live in a separately compiled binary, gcc
will probably not be able to tell that *b couldn't be changed by the
call to fcntl, so it will dump it to memory before the function call and
read it back afterwards. While it's a little dodgy, in the past I've
often made sure gcc re-reads memory locations by passing a pointer to
them to a function compiled in a separate unit. If gcc ever gets some
kind of super-funky cross-unit binary optimisation, then this might get
optimised away, but I wouldn't expect such a thing soon :)
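
A rough sketch of both options (the function name "opaque" is invented
here; option (b) only keeps working as long as the compiler cannot see
the callee's body):

/* Option (a): make every access of *b a real load. */
volatile int *b;

/* Option (b): pass the address to a function compiled in a separate
   unit.  The compiler must then assume *p may change across the call. */
void opaque(int *p);   /* defined in another translation unit */

int wait_for_flag(int *p)
{
    while (!*p)
        opaque(p);     /* forces *p to be re-read on each iteration */
    return *p;
}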

Chris

Thanks,
Chris



Re: How to get MIN_EXPR without using deprecated min operator

2005-05-06 Thread chris jefferson
Michael Cieslinski wrote:
Consider the following short program:
   #include <algorithm>

   void Tst1(short* __restrict__ SrcP, short* __restrict__ MinP, int Len)
   {
       for (int x = 0; x < Len; x++)
           MinP[x] = SrcP[x] < MinP[x] ? SrcP[x] : MinP[x];
   }

   void Tst2(short* __restrict__ SrcP, short* __restrict__ MinP, int Len)
   {
       for (int x = 0; x < Len; x++)
           MinP[x] = std::min(SrcP[x], MinP[x]);
   }

If I compile it with
   gcc41 -O2 -ftree-vectorize -ftree-vectorizer-verbose=5
function Tst1 gets vectorized but Tst2 does not.
The reason for this is 
 

Out of interest, do you get vectorisation from:
MinP[x] = (SrcP[x]
Chris


Re: Validating a C++ back-end

2005-05-10 Thread chris jefferson
Vasanth wrote:
Hi,
I am working on a fresh C++ port and I am filling in all the machine
specific hooks.
How do I run the C++ testsuite on my compiler? I am familiar with the
GCC torture/execute tests and have my backend passing those tests
reasonably well. Now, I am looking for something similar for C++ to
test my support for the language's features.
I am a bit confused by how the Deja Gnu tests are organized. I see
that the testsuite contains a lot of different tests of which, some
are compile only tests, some are pre-processor tests etc. Aren't the
"runnable" tests the main kind that I should be interested in? Given a
compiler version of 3.x.x shouldn't I be able to rely on test results
run on the generic portions by the GCC maintainers themselves and
worry about only the runnable tests?
Is there a specific/sufficient list of tests that I need to validate...?
 

If you can get it going, I'd advise also trying Boost; it uses a lot of
language features and will be a good test (although make sure you
compare results against x86, as some tests do fail on various versions
of gcc).

Chris


Re: GCC and Floating-Point

2005-05-25 Thread chris jefferson

Vincent Lefevre wrote:


On 2005-05-24 09:04:11 +0200, Uros Bizjak wrote:
 


 I would like to point out that for applications that crunch data
from real world (no infinites or nans, where precision is not
critical) such as various simulations, -ffast-math is something that
can speed up application a lot.
   



But note that even when precision is not critical, you may get
consistency problems or annoying unintuitive side effects. For
instance, see

 http://gcc.gnu.org/bugzilla/show_bug.cgi?id=15134

where

float x = 30.0;
int main()
{
 if ( 90.0/x != 3.0)
   abort();
 return 0;
}

fails with -ffast-math (on x86). I would not recommend it, unless
the user knows all the consequences.

 

On the other hand, in general, using != and == on floating point
numbers is always dangerous if you do not know all the consequences.
For example, in your program above, if I use 30.1 and 90.3 the program
fails even without -ffast-math.
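
A minimal illustration of that point (a sketch; the outcome depends on
the target's floating-point arithmetic, here assuming ordinary IEEE
doubles and no -ffast-math):

#include <cstdio>

int main()
{
    double x = 30.1;
    /* 30.1 and 90.3 have no exact binary representation, so the
       quotient need not compare equal to 3.0. */
    std::printf("90.3 / x == 3.0 ? %s\n", 90.3 / x == 3.0 ? "yes" : "no");
    return 0;
}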


Chris


Re: Sine and Cosine Accuracy

2005-05-30 Thread chris jefferson

Scott Robert Ladd wrote:


Marc Espie wrote:
 


Heck, I can plot trajectories on a sphere that do not follow great circles,
and that extend over 360 degrees in longitude.  I don't see why I should be
restricted from doing that.
   



Can you show me a circumstance where sin(x - 2 * pi) and sin(x + 2 * pi)
are not equal to sin(x)?

Using an earlier example in these threads, do you deny that
sin(pow(2.0,90.0)) == sin(5.15314063427653548) ==
sin(-1.130044672903051) -- assuming no use of
-funsafe-math-optimizations, of course?

 

I would like to say yes, I disagree that this should be true. By your
argument, why isn't sin(pow(2.0,90.0)+1) == sin(6.15314...)? Also, how
the heck do you intend to actually calculate that value? You can't just
keep subtracting multiples of 2*pi from pow(2.0, 90.0), else nothing
will happen, and if you choose to subtract some large multiple of 2*pi,
your answer won't end up accurate to anywhere near that many decimal
places. Floating point numbers approximate real numbers, and at the
size you are considering, the approximation spans values for which
sin(x) takes all values in the range [-1,1].
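
A small sketch of why no argument reduction can rescue this (assuming
IEEE doubles; the figure in the comment is approximate):

#include <cstdio>
#include <cmath>
#include <cfloat>

int main()
{
    double x = std::pow(2.0, 90.0);
    /* Neighbouring doubles near 2^90 are about x * DBL_EPSILON apart,
       i.e. roughly 2^38 (about 2.7e11) -- vastly more than one period
       of 2*pi. */
    std::printf("spacing between doubles near 2^90: about %g\n",
                x * DBL_EPSILON);
    return 0;
}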


Chris


Re: What is wrong with Bugzilla? [Was: Re: GCC and Floating-Point]

2005-05-30 Thread chris jefferson

Kai Henningsen wrote:


The entire GCC website (of which GCC
Bugzilla is a part) could be the poster child for why developers
should never be allowed to design user interfaces, especially web user
interfaces. I'm sure I'll get flamed for wanting style over substance
or about the proliferation of eye candy, but the GCC web site and it's
   



... which I think are poster childs why non-technical people *usually*  
ought not to be allowed to design web sites.


 


attendent support pages can only be charitably described as eye trash.
Yes, you can find the bug link if you read the main page long enough
and move down the page slowly enough, or if, like me, you fire up
Firefox's find and use that to go quickly to the bug link. But that's
beside the point. At the very least the design of the GCC web site
makes the whole project look like someone who has just discovered the
web and decided to build a few pages. And please don't harp on making
   



To me, it looks *very* professional.

 

I'm sorry, but I felt I couldn't leave this comment alone. The main GCC
page is badly designed. The logo looks very amateurish; also, try
exploring the page without prior knowledge. I just tried this. I
suspect most people on their first visit are there because they want a
copy of gcc, and it's perhaps reasonable to assume at this point that
they don't know a huge amount and perhaps don't want to compile from
source (if they had a copy of gcc, they wouldn't be here :). Yes, I
know and you know it's not gcc's job to provide that, but I'd look for
a copy of gcc by typing "gcc" into Google, and gcc.gnu.org is where you
get to first.


Let's try to get a copy of gcc. First, I see something in the top left
marked "releases". I click on it. It doesn't mention 4.0, and despite
reasonable attempts I see no sign of code. Next I see a mention of
4.0.0 in the main body. After wandering around that link for quite a
while I find a link to the mirrors page, which is full of source.


Next I try documentation, then installation; that talks about compiling
again. Finally, under download, binaries, I find what I want. Seeing as
I suspect that is the link most people want on their first visit, it
should perhaps be a little more obvious, and in the main body near the
top?


Chris


Re: Getting started with contributing

2005-06-09 Thread chris jefferson

Lee Millward wrote:


I'd like to get started with helping to develop GCC but am seeking
some advice from those of you who are regular contributors on the best
approach to adopt.

I have spent the last few weeks reading the gcc-patches mailing list
and the documentation available on GCC from the Wiki and various other
documents I have found on the Internet to try and get a feel for how
everything works. I also have the latest CVS code and have spent time
reading through the source to become familiar with the coding
conventions in use. I've read through the "beginner" GCC projects on
the website and would like to hear peoples opinion on how useful
submitting patches for some of the these projects would be. Some of
the work being carried out and posted on the gcc-patches mailing list
makes those projects seem insignificant in comparision.

Thanks for your time in reading this, 
Lee.
 

Speaking as someone who started contributing to gcc (actually
libstdc++-v3) quite recently, I wouldn't worry too much that anything
you think you can do seems inconsequential. If you wander around
bugzilla and try to fix bugs, and also look at bits of code related to
the things you look at, it won't take very long to find some minor
annoying bugs that have been hanging around with no-one getting around
to fixing them, or some code that looks a little crusty and hacked
which could do with a spot of cleaning.


Also, there are quite a few bits of code which aren't ever exercised by
the testsuite, and really all code should be. Writing testcases isn't
the most exciting job in the world, but it's an easy way to get some
code written, and I'm fairly sure no-one will reject new test cases
which stress untested pieces of code. Also, you are in the useful
position of looking at the code with a fresh eye: take the opportunity
to convince yourself the algorithms actually work correctly in all
cases and don't carry any unnecessary overheads. While it's not as
exciting as writing a new uber-super-SRA-loop-floating-point-tree-SSE3
optimisation pass, a spot of cleaning up old corners and clearing out
dusty cobwebs will, I'm sure, be useful, and it provides a way to get
deeper into gcc.


Chris


Re: Fixing Bugs (Was: A Suggestion for Release Testing)

2005-06-14 Thread chris jefferson

Scott Robert Ladd wrote:


Richard Guenther wrote:
 


Take a break and come back with results of actual work done,
this impresses people a lot more than (repeated) ranting about
gcc development in general.
   



I have worked on GCC; not much, and probably trivial in your eyes,
but practical work nonetheless. To trivialize contributions is a great
way of driving away potential contributors.

I would like to improve floating-point in GCC; doing so scratches my
personal itch. My silly idea is to determine the best approach
*through discussion*.

 

One thing I have come across, both in gcc and in other projects, is
that often discussion is not the best option; instead, just writing
some code is better.


It's very easy to have discussions go around in circles about whether
option A or option B is better, and which will lead to slowdowns, or
intrusive changes, or whatever. It's very hard to know how well
something will actually work, and whether it will even be possible,
until it's actually been written. While it's briefly annoying the first
time you write code which then isn't used, I've quickly learned it's
faster and easier than extensive discussion; most good code will go
through 3 or 4 iterations before it finally settles, and needs a whole
bundle of tests writing, so writing an initial trial version is not
actually that big a time investment compared to the total amount of
time something will take. Working code is also, of course, by far the
most convincing argument :).


I have 4 completely different implementations of std::tr1::tuple lying 
around somewhere, obviously only one was actually used, but the only 
real way to know which would be best was to just write them and see how 
they looked and worked.


Chris


Re: Some notes on the Wiki

2005-07-11 Thread chris jefferson
Gabriel Dos Reis wrote:

>Daniel Berlin <[EMAIL PROTECTED]> writes:
>
>| On Mon, 11 Jul 2005, Nicholas Nethercote wrote:
>| 
>| > On Mon, 11 Jul 2005, Daniel Berlin wrote:
>| >
>| >>> Also, a web-browser is much slower than an info-browser,
>| >>> especially when doing searchs.
>| >> You must be close to the only user i've met who uses the info
>| >> browser :)
>| >
>| > I use it.  Info pages suck in many ways, but they're fast to load
>| > from an xterm, fast to search, and even faster when you know where
>| > they are in the docs (eg. I find myself looking at the GCC C
>| > extensions quite often, and I can get there very quickly).
>| 
>| Most people i've met can't undertand the commands for info (pinfo is
>| nicer in this regard).
>
>maybe the conclusion to draw is that you've met some special people in
>a small part of the community.
>
>  
>
I just had a quick quiz in the C++ IRC channel I was in, and very few
people there like info, and very few are comfortable using it. There
was general agreement that HTML, PDF and DocBook are the best ways to
receive documentation.

Chris


Re: Pointers in comparison expressions

2005-07-12 Thread chris jefferson
Mirco Lorenzoni wrote:

>Can a pointer appear in a C/C++ relational expression which doesn't test the 
>equality (or the inequality) of that pointer with respect to another pointer? 
>For example, are the comparisons in the following program legal code?
>
>/* test.c */
>#include 
>
>int main(int argc, char* argv[])
>{
>   void *a, *b;
>   int aa, bb;
>
>   a = &aa;
>   b = &bb;
>   
>  
>
Actually, I'm fairly certain that at this point the program stops being
legal code, as (I believe) you can only compare pointers which come
from the same allocation (be that an array, a malloc'd block, etc.).

However, comparing pointers with < is something I do all the time when
writing various kinds of algorithms. For what reason would you want to
see it warned about?
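
For what it's worth, a sketch of the usual portable escape hatch:
std::less<T*> is required to give a total order over pointers even
where the built-in operators are not:

#include <functional>

bool ordered_before(const int *a, const int *b)
{
    /* Well defined even for pointers into unrelated objects. */
    return std::less<const int *>()(a, b);
}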

Chris



Re: Problems on Fedora Core 4

2005-07-20 Thread chris jefferson
This is not the correct mailing list for help using gcc, it is for help
developing gcc. Use gcc-help in future please.
Michael Gatford wrote:

>
>
>std::map::const_iterator functionIterator =
> quickfindtag.find(funcname);

put "typename" at the beginning of this line.
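
A reduced sketch of why the keyword is needed there (the surrounding
template and its names are invented for illustration): inside a
template, a nested type that depends on a template parameter must be
introduced with "typename", or the compiler will not parse it as a type.

#include <map>
#include <string>

template <typename Func>
bool have_function(const std::map<std::string, Func>& quickfindtag,
                   const std::string& funcname)
{
    typename std::map<std::string, Func>::const_iterator functionIterator =
        quickfindtag.find(funcname);
    return functionIterator != quickfindtag.end();
}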

Chris


When is it legal to compare any pair of pointers?

2005-09-13 Thread chris jefferson
I realise that according to the C++ standard it isn't legal to compare
two pointers which are not from the same array. Is anyone aware of
anything in g++ which would actually forbid this, and is there any way
of checking whether it will be valid?

I want to be able to perform two main operations. Firstly to compare any
pair of pointers with ==, and also to write code like:

template<typename T>
  bool
  in_range(T* begin, T* end, T* value)
  { return (begin <= value) != (end <= value); }

Where value may be a pointer not from the same array as begin and end.
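
For reference, a sketch of the intended use (the caller and buffer
names are invented; the second check is exactly the case the standard
leaves undefined):

#include <cassert>

template<typename T>
  bool
  in_range(T* begin, T* end, T* value)
  { return (begin <= value) != (end <= value); }

int main()
{
    int buf[8];
    int other;
    assert(in_range(buf, buf + 8, buf + 3));    /* inside [begin, end) */
    /* "other" is unrelated to buf, so these comparisons are undefined
       by the standard, although on a flat address space they normally
       behave as expected. */
    bool outside = !in_range(buf, buf + 8, &other);
    return outside ? 0 : 1;
}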

Apologies for sending this question to the main gcc list, but I want to
submit such code to the debugging part of libstdc++-v3, and wanted to
check whether any optimisations might make use of the fact that
comparing pointers from different arrays is undefined.

Thank you,

Chris



Re: How set an iterator to NULL

2005-09-20 Thread chris jefferson
Michael Cieslinski wrote:

>Since last week this small program does no longer compile.
>My question are:
>Is this correct or should I file a bug report?
>How is it possible to initialize an iterator to NULL?
>
>  
>
A patch was recently submitted specifically to stop this working, as it
shouldn't.

There isn't a general way of setting iterators to NULL (some people
believe there should be). The usual advice is either a) don't create
the iterators until you have somewhere to point them, or b) often (but
not always) it is natural to use the iterator returned by list.end() as
a "NULL" iterator.
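
A small sketch of option (b) (names invented for illustration):

#include <list>

int main()
{
    std::list<int> items;
    /* end() stands in for "not pointing at anything yet". */
    std::list<int>::iterator found = items.end();

    for (std::list<int>::iterator it = items.begin(); it != items.end(); ++it)
        if (*it == 42)
            found = it;

    return found != items.end() ? 0 : 1;
}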

Chris


Re: using multiple trees with subversion

2005-10-20 Thread chris jefferson
Mike Stump wrote:

> On Oct 19, 2005, at 2:56 AM, François-Xavier Coudert wrote:
>
>> Or am I the only person to find that disk is expensive (or working 
>> on his own hardware, maybe)?
>
>
> A checkout costs US$0.50.  This is around 2.6x more expensive than a 
> cvs checkout.  Check around locally, maybe you can find `throwaways' 
> in the 4GB-15GB range.
>
>
> throwaways - what a person that likes to upgrade every 3-5 years 
> throws out, because its too slow/small to do anything with.


Could you just find one to fit in my iBook please. I'll send you my
address. Thanks.

Chris


Re: [C++] Should the complexity of std::list::size() be O(n) or O(1)?

2005-11-23 Thread chris jefferson
聂久焘 wrote:
> The C++ standard said Container::size() should have constant complexity
> (ISO/IEC 14882:1998, pp. 461, Table 65), while the std::list::size() in
> current STL of GCC is defined as { std::distance(begin(), end()); }, whose
> complexiy is O(n).
>  
> Is it a bug?
>
>   
This question would be better asked on [EMAIL PROTECTED], the mailing
list for gcc's implementation of the C++ standard library.

This question comes up every so often. In "official standard speak",
the word "should" has a specific meaning: an implementation is supposed
to do something unless there is a good reason not to.

The reason that size() is O(n) is to allow some of the splice functions
to be more efficient. Basically it's a tradeoff between fast splicing
and fast size.

Note that empty() is O(1), as required by the standard, so if that's
what you want, you should use that.
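
For example (a sketch), to test for emptiness prefer:

#include <list>

bool has_work(const std::list<int>& queue)
{
    /* empty() is guaranteed O(1); size() may walk the whole list here. */
    return !queue.empty();
}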


Re: PR 25512: pointer overflow defined?

2005-12-21 Thread chris jefferson
Robert Dewar wrote:
> Richard Guenther wrote:
>
>> On Wed, 21 Dec 2005, Andrew Haley wrote:
>>
>>  
>>
>>> Richard Guenther writes:
>>> > > The problem in this PR is that code like in the testcase (from
>>> > OpenOffice) assumes that pointer overflow is defined.  As the
>>> > standard does not talk about wrapping pointer semantics at all (at
>>> > least I couldn't find anything about that), how should we treat
>>> > this?
>>>
>>> Look at Section 6.5.6, Para 8.  The code is undefined.
>>>   
>>
>> This talks about pointers that point to elements of an array object.
>> It does not talk about doing arithmetic on arbitrary pointer
>> (constants),
>> which is what the code does.
>>
> Right, but that's the point. "doing arithmetic on arbitrary pointer"
> values is
> not defined, it is not even defined to compare two pointers pointing
> to two
> different objects.
>
While that is true according to the standard, I believe that on most
systems you can in practice compare any two pointers. In particular,
the C++ standard does require std::less to provide a total ordering on
pointers, and at the moment that is implemented on all systems by just
doing "a < b" on the two pointers.

Chris


Re: use of changes in STL library

2006-05-25 Thread Chris Jefferson

On 5/25/06, Marek Zuk <[EMAIL PROTECTED]> wrote:

Hi
thanks a lot for your reply.
I'm not sure if you understood what I meant...

I'm a student of the Faculty of Mathematics & Computer Science at the
Warsaw University of Technology. I'm in my final year of my studies
(MSc) and I'm working on my final project.
The subject of my project is: "Enhancing associative containers
(map, multimap, set and multiset) in STL with the possibility of
choosing the way of their implementation".
So I'm going to develop libstdc++.

Now associative containers in STL are implemented using red-black
trees. What I want to do is to enable the choice of implementation of
these containers by adding one parameter to the templates, so that the
containers could be built using b-trees, plain vectors or other
structures.

So my question is:
How to make changes in libstdc++ and how to test these changes in the
easiest way?

Thank you very much for your help.



My personal advice for doing this would be as follows.

1) Learn how to download, compile and install all of gcc into a custom
directory. You probably want to look at the options to only compile
certain languages (you only want C and C++).

2) Look in the libstdc++-v3 directory. I think everything you want
will be in the include directory. The actual headers you include
(<vector>, etc.) are in the std directory; for example, vector
is called std_vector.h.

3) The actual implementations of the algorithms are in bits/. Explore
around in here to find the implementations.

When you have changed something, recompile by going into the
libstdc++-v3 directory.




Marek Zuk


Paolo Bonzini wrote:
>
>>> Could you write us what command we should use?
>>> We'd like to emphasize that we don't want to recompile whole gcc on our
>>> computer, we just want to make use of changes we did in the repository.
>>
>> Short answer is you can't. The gcc build system doesn't support
>> building just the target libraries. You're going to have to build the
>> whole thing.
>
> You can build GCC only once, and then modify libstdc++.  If you don't
> want to install GCC, you can install libstdc++ with
>
>   make install-libstdc++-v3
>
> Paolo
>



Re: PATCH: TR1 unordered associative containers

2005-02-17 Thread Chris Jefferson
Joe Buck wrote:
On Thu, Feb 17, 2005 at 03:47:03PM -0800, Matt Austern wrote:
 

I'm sure there are still lots of horrible bugs, which will only be  
found with a more complete test suite.  But the core functionality  
works, and at this point I think it'll improve faster in the CVS server  
than sitting on my hard disk.

OK to commit to mainline?
   

A namespace purity nitpick:
You define a macro named tr1_hashtable_define_trivial_hash.  Shouldn't
that be __tr1_hashtable_define_trivial_hash or something similar?
 

Having just read through it, everything seems very reasonable to me,
but I think there is a general need for a liberal splattering of __
here, there and everywhere :) It would be very nice if we could stop
having to prefix everything with __ and just pop it into a namespace
instead, but fixing that is perhaps something for another day :)
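
A sketch of the distinction, with invented names: an ordinary helper
template can hide in a namespace, but a macro has no namespace at all,
so only its spelling keeps it out of the user's way.

namespace detail_sketch
{
  template<typename T> struct trivial_hash { };
}

#define DETAIL_SKETCH_DEFINE_TRIVIAL_HASH(T) \
  namespace detail_sketch { template<> struct trivial_hash<T> { }; }

DETAIL_SKETCH_DEFINE_TRIVIAL_HASH(int)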

Chris


Re: Inlining and estimate_num_insns

2005-02-27 Thread chris jefferson
I take it as a lame property of our current inlining heuristics
and function size estimate that for
  inline int foo { return 0; }
  int bar { return foo(); }
the size estimate of foo is 2, that of bar is initially 13 and 5
after inlining.  3.4 did better in that it has 1 for foo, 12 for bar
before and 3 after inlining.  Though one could equally argue that
3.4 had an abstraction penalty of 3.0 while 4.0 now has 2.5.
It is not hard to believe that for C code this does not matter much,
but for C++ it effectively lowers our inlining limits as at least
things like accessor methods are very common, and if you think of
template metaprograms, with the old scheme we're just losing.
While so far the inliner has been coping, it is worth pointing out that
more abstractions are working their way into libstdc++. For example,
the next version of libstdc++ has a shared implementation of sorting
with and without a predicate, by using the predicate "bool less(const
T& a, const T& b) { return a < b; }" (not the actual implementation!)
when no predicate is specified.

In the near future more such code may (hopefully) be introduced to
allow the same implementation of sort to deal with more specialisations
without having to duplicate large sections of code. Assuming the
inliner does its work, in most cases once inlining has been performed
the resulting code can be smaller than the function call it replaces. I
would personally say that it is very important that the inliner is
capable of realising when a number of very small nested functions will
collapse to almost no code at all.
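
A reduced sketch of the pattern being described (invented names, not
the libstdc++ code): the predicate-less entry point forwards to the
predicate version with a trivial functor, and relies on the inliner to
make that wrapper free.

#include <iterator>

template<typename T>
  struct default_less
  {
    bool operator()(const T& a, const T& b) const { return a < b; }
  };

template<typename Iter, typename Compare>
  void sort_impl(Iter first, Iter last, Compare comp)
  {
    /* ...the single real sorting implementation, comparing with comp... */
  }

template<typename Iter>
  void sort_sketch(Iter first, Iter last)
  {
    typedef typename std::iterator_traits<Iter>::value_type value_type;
    sort_impl(first, last, default_less<value_type>());
  }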

Chris


Re: C++98/C++11 ABI compatibility for gcc-4.7

2012-06-16 Thread Chris Jefferson

On 15/06/12 21:45, Gabriel Dos Reis wrote:

On Fri, Jun 15, 2012 at 3:12 PM, James Y Knight  wrote:


IMO, at the /very least/, libstdc++ should go ahead and change std::string
to be the new implementation. Once std::string is ABI-incompatible between
the modes, there's basically no chance that anyone would think that
linking things together from the two modes is a good thing to try, for
more than a couple minutes.


Agreed.


While I realise it doesn't fix all problems (for example, with return
values), is there any reason the C++11 ABI-incompatible types have not
been put into a separate inline namespace, so that mixing the two modes
fails at link time rather than misbehaving at run time?
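
For concreteness, a sketch of the inline-namespace idea (invented
names, not the actual libstdc++ layout): in C++11 mode the class lives
in lib::abi_v2, which mangles differently from lib::string, so mixing
the two modes fails to link instead of misbehaving at run time.

namespace lib
{
#if __cplusplus >= 201103L
  inline namespace abi_v2
  {
    class string { /* new, ABI-incompatible layout */ };
  }
#else
  class string { /* old layout */ };
#endif
}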


Chris


gcc-in-cxx: Garbage Collecting STL Containers

2008-06-25 Thread Chris Jefferson
Could someone point me towards what is necessary to add STL containers
to the garbage collector?

One big problem with garbage collection in C++ is the need to run
destructors. If the (I believe very reasonable) decision is made to
require that running destructors is not necessary for garbage-collected
types, then in my experience doing gc is very easy: for garbage
collection it's enough to just look at the internal members, which all
behave "sensibly".

Stripping away all the C++isms, and assuming that we use the default
allocator, which uses malloc, a std::vector<T> is just a struct:

struct vector
{
  T* begin;
  T* finish;
  T* end_of_storage;
};

where we increment finish whenever we add something to the vector,
until finish == end_of_storage, at which point new memory is allocated,
the data is copied across, and the old memory is freed. I can't think
of any other (or any simpler) way to construct a variable-sized
container, in either C or C++.

Other containers are similar.
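
A sketch of what a marker might do for such a vector (the GC hook name
is invented, and it assumes, as argued above, that destructors never
need to run), shown for a vector whose elements are themselves
GC-visible pointers:

struct vector_of_ptrs
{
    void **begin;
    void **finish;
    void **end_of_storage;
};

extern void gc_mark_object(void *p);   /* hypothetical GC hook */

void gc_mark_vector(const vector_of_ptrs *v)
{
    /* The backing store is a single malloc'd block... */
    gc_mark_object(v->begin);

    /* ...and the live elements occupy [begin, finish). */
    for (void **p = v->begin; p != v->finish; ++p)
        gc_mark_object(*p);
}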

Chris