On Nov 5, 2006, at 08:46, Kenneth Zadeck wrote:
The thing is that even as memories get larger, something has to give.
There are and will always be programs that are too large for the most
aggressive techniques and my proposal is simply a way to gracefully
shed the most expensive techniques as
Most people aren't waiting for compilation of single files.
If they do, it is because a single compilation unit requires
parsing/compilation of too many unchanging files, in which case
the primary concern is avoiding redoing useless compilation.
The common case is that people just don't use the -
On Nov 11, 2006, at 03:21, Mike Stump wrote:
The cost of my assembler is around 1.0% (ppc) to 1.4% (x86)
overhead as measured with -pipe -O2 on expr.c. If it was
converted, what type of speedup would you expect?
Given that CPU usage is at 100% now for most jobs, such as
bootstrapping GCC,
On Nov 13, 2006, at 21:27, Dave Korn wrote:
To be fair, Mike was talking about multi-core SMP, not threading on a
single cpu, so given that CPU usage is at 100% now for most jobs, there
is an Nx100% speedup to gain from using 1 thread on each of N cores.
I'm mostly building GCC on multip
On Nov 14, 2006, at 12:49, Bill Wendling wrote:
I'll mention a case where compilation was wickedly slow even
when using -j#. At The MathWorks, the system could take >45 minutes
to compile. (This was partially due to the fact that the files were
located on an NFS mounted drive. But also because C
On Dec 3, 2006, at 12:44, Kaveh R. GHAZI wrote:
In case i370 support is revived or a format not using base==2 is
introduced, I could proactively fix the MPFR precision setting for any
base that is a power of 2 by multiplying the target float precision by
log2(base). In the i370 case I would mu
On Dec 4, 2006, at 20:19, Howard Hinnant wrote:
If that is the question, I'm afraid your answer is not accurate.
In the example I showed the difference is 2 ulp. The difference
appears to grow with the magnitude of the argument. On my systems,
when the argument is DBL_MAX, the difference
On Dec 20, 2006, at 09:38, Bruno Haible wrote:
But the other way around? Without -fwrapv the compiler can assume more
about the program being compiled (namely that signed integer overflows
don't occur), and therefore has more freedom for optimizations. All
optimizations that are possible with -f
On Dec 13, 2006, at 17:09, Denis Vlasenko wrote:
# g++ -c -O3 toto.cpp -o toto.o
# g++ -DUNROLL -O3 toto.cpp -o toto_unroll.o -c
# size toto.o toto_unroll.o
   text    data     bss     dec     hex filename
    525       8       1     534     216 toto.o
    359       8       1     368     170 to
On Dec 31, 2006, at 19:13, Daniel Berlin wrote:
Note the distinct drop in performance across almost all the benchmarks
on Dec 30, including popular programs like bzip2 and gzip.
Not so.
To my eyes, the specint 2000 mean went UP by about 1% for the
base -O3 compilation. The peak enabled more un
On Jan 1, 2007, at 12:16, Joseph S. Myers wrote:
For a program to be secure in the face of overflow, it will generally
need explicit checks for overflow, and so -fwrapv will only help if
such checks have been written under the presumption of -fwrapv
semantics.
Yes, but often people do writ
On Jan 1, 2007, at 21:14, Ian Lance Taylor wrote:
[...]
extern void bar (void);
void
foo (int m)
{
  int i;
  for (i = 1; i < m; ++i)
    {
      if (i > 0)
        bar ();
    }
}
Here the limit for i without -fwrapv becomes (1, INF]. This enables
VRP to eliminate the test "i > 0". With -fwra
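A minimal sketch of the transformation being described (added for
illustration, not part of the original message): with signed overflow
assumed undefined, VRP knows i stays at least 1, so the function above
effectively becomes:

extern void bar (void);
void
foo (int m)
{
  int i;
  /* Without -fwrapv, i cannot wrap to a non-positive value, so the
     "i > 0" test is always true and has been removed.  */
  for (i = 1; i < m; ++i)
    bar ();
}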
On Mar 19, 2007, at 05:44, François-Xavier Coudert wrote:
I have the three following questions, probably best directed to
middle-end experts and Ada maintainers:
* How can I know the longest float type? My first patch uses the
long_double_type_node unconditionally, but it surely isn't a generic
In 3.1, you write:
The statistics gathered over the programs mentioned in the previous
section show that about 43% of all statements contain 0 or more
register operands
I'd assume 100% contain 0 or more register operands.
Did you mean 43% contain 1 or more?
-Geert
On Apr 27, 2007, at 06:12, Janne Blomqvist wrote:
I agree it can be an issue, but OTOH people who care about precision
probably 1. avoid -ffast-math 2. use double precision (where these
reciprocal instrs are not available). Intel calls it -no-prec-div,
but it's enabled for the "-fast" catch
On Mar 7, 2005, at 12:40, Giovanni Bajo wrote:
But how are you proposing to handle the fact that the C++ FE needs to
fold constant expressions (in the ISO C++ sense of 'constant
expressions')? For instance, we need to fold "1+1" into "2" much before
gimplification. Should a part of fold() be ext
On Mar 9, 2005, at 03:18, Duncan Sands wrote:
if the Ada front-end has an efficient, accurate implementation
of x^y, wouldn't it make sense to move it to the back-end
(__builtin_pow) so everyone can benefit?
It does not have it yet. The current implementation is reasonably
accurate, but not very fast. How
Hi Per,
Of the three proposals:
[...]
The ideal solution I think is for Ada to use line-map's
source_location for Sloc in its lexer.
[...]
translate Sloc integers to source_location
when we translate the Ada internal format to Gcc trees.
[...]
the location_t in the shared Gcc should be a language-
On Mar 21, 2005, at 02:54, Nick Burrett wrote:
This seems to be a recurrence of PR5677.
I'm sorry, but I can't see any way this is related, could you elaborate?
for Aligned_Word'Alignment use
- Integer'Min (2, Standard'Maximum_Alignment);
+ Integer'Min (4, Standard'Maximum_Alignment);
On Mar 21, 2005, at 11:02, Nick Burrett wrote:
OK, but if I don't apply the patch, GNAT complains that the alignment
should be 4, not 2 and compiling ceases.
Yes, this is related to PR 17701 as Arno pointed out to me in a private
message.
Indeed, the patch you used works around this failure and c
On Mar 22, 2005, at 22:09, Per Bothner wrote:
Of course that's in the eye of the beholder. I think a local translation
is cleaner and more robust/safer than a global opaque type/call-back.
OK, let's go with that approach then.
-Geert
%cat LAST_UPDATED
Sat Mar 26 21:31:28 EST 2005
Sun Mar 27 02:31:28 UTC 2005
stage1/xgcc -Bstage1/ -B/opt/gcc-head//powerpc-apple-darwin7.8.0/bin/
-c -g -O2 -mdynamic-no-pic -DIN_GCC -W -Wall -Wwrite-strings
-Wstrict-prototypes -Wmissing-prototypes -pedantic -Wno-long-long
-Wno-variadic-macro
On Apr 1, 2005, at 16:36, Mark Mitchell wrote:
In fact, I've long said that GCC had too many knobs.
(For example, I just had a discussion with a customer where I
explained that the various optimization passes, while theoretically
orthogonal, are not entirely orthogonal in practice, and that truni
As far as I can see from this patch, it rounds incorrectly.
This is a problem with the library version as well, I believe.
The issue is that one cannot round a positive float to int
by adding 0.5 and truncating. (Same issues with negative values
and subtracting 0.5, of course). This gives an error
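A minimal C sketch of the failure mode, added here for illustration
(the value used is the largest double below 0.5):

#include <stdio.h>
#include <math.h>

int
main (void)
{
  double x = nextafter (0.5, 0.0);   /* largest double < 0.5 */
  /* x + 0.5 equals 1 - 2^-54, exactly halfway between 1 - 2^-53 and 1.0;
     round-to-nearest-even picks 1.0, so truncation yields 1 even though
     x correctly rounds to 0.  */
  printf ("%.17g -> %d\n", x, (int) (x + 0.5));
  return 0;
}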
On Apr 7, 2005, at 10:12, Steve Kargl wrote:
On Thu, Apr 07, 2005 at 08:08:15AM -0400, Geert Bosch wrote:
As far as I can see from this patch, it rounds incorrectly.
This is a problem with the library version as well, I believe.
Which library?
libgfortran, or whatever is used to implement NINT
On Apr 7, 2005, at 13:27, Steve Kargl wrote:
Try -fdump-parse-tree. You've given more digits in y than
its precision. This is permitted by the standard. It appears
the gfortran frontend is taking y = 0.49 and the closest
representable number is y = 0.5.
So, why does the test y < 0.5 yield tr
On Apr 7, 2005, at 13:54, Steve Kargl wrote:
I missed that part of the output. The exceedingly
long string of digits caught my attention. Can
you submit a PR?
These routines should really be done as builtins, as almost all
front ends need this facility and we'd fit in with the common
frameworks for
On May 30, 2005, at 16:50, Florian Weimer wrote:
I'll try to phrase it differently: If you access an object whose bit
pattern does not represent a value in the range given by
TYPE_MIN_VALUE .. TYPE_MAX_VALUE of the corresponding type, does this
result in erroneous execution/undefined behavior?
On May 30, 2005, at 02:57, Victor STINNER wrote:
I'm using gcc "long long" type for my calculator. I have to check
integer overflow. I'm using sign compare to check overflow, but it
doesn't work for 10^16 * 10^4 :
1 * 1
I see your question went unanswered; however, I do th
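To make the example concrete, here is a sketch of a well-defined check
(assuming positive operands, which holds for 10^16 and 10^4). Note that
the wrapped 64-bit product of 10^16 * 10^4 happens to be positive,
which is why a sign test misses it:

#include <limits.h>

/* Nonzero if a * b would overflow long long; assumes a > 0 and b > 0.  */
static int
mul_would_overflow (long long a, long long b)
{
  return a > LLONG_MAX / b;
}

/* mul_would_overflow (10000000000000000LL, 10000LL) returns 1.  */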
On Jun 3, 2005, at 09:02, Florian Weimer wrote:
It probably makes sense to turn on -fwrapv for Ada because even
without -gnato, the behavior is not really undefined:
| The reason that we distinguish overflow checking from other kinds of
| range constraint checking is that a failure of an overfl
This is http://gcc.gnu.org/PR22319.
On Jul 6, 2005, at 06:17, Andreas Schwab wrote:
Andreas Jaeger <[EMAIL PROTECTED]> writes:
Building ada with the patch for flag_wrapv fails now with a new
error:
+===GNAT BUG DETECTED==+
| 4.1.0 20050
On Oct 27, 2005, at 14:12, Eric Botcazou wrote:
I'm under the impression that it's worse on IA-64 because of the
"infinite precision", but I might be wrong.
Fused multiply-add always uses "infinite precision" in the intermediate
result. Only a single rounding is performed at the end. We real
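A small worked example of the single-rounding effect, added for
illustration: with e = 2^-52, (1 + e) * (1 - e) = 1 - 2^-104, which a
separately rounded multiply collapses to 1.0, while fma keeps the
low-order term.

#include <stdio.h>
#include <math.h>

int
main (void)
{
  double a = 1.0 + 0x1p-52, b = 1.0 - 0x1p-52;
  printf ("%g\n", a * b - 1.0);      /* product rounds to 1.0 first: prints 0 */
  printf ("%g\n", fma (a, b, -1.0)); /* single rounding: prints -2^-104 */
  return 0;
}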
On Oct 27, 2005, at 17:19, Andreas Schwab wrote:
I think this is what the FP_CONTRACT pragma is supposed to provide.
Yes, but it seems there is no way in the middle end / back end to
express this. Or am I just hopelessly behind the times? :)
-Geert
On Oct 27, 2005, at 17:25, Steve Ellcey wrote:
It would be easy enough to add an option that turned off the use of the
fused multiply and add in GCC but I would hate to see its use turned off
by default.
Code that cares should be able to express barriers across which no
contraction is poss
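For reference, a sketch of the source-level barrier C99 defines for
this. GCC's handling of contraction has mostly been through
-ffp-contract rather than the pragma, so treat this as the standard's
mechanism, not a statement about GCC support:

double
dot2 (double a, double b, double c, double d)
{
#pragma STDC FP_CONTRACT OFF
  /* With contraction off, a*b and c*d are each rounded before the add,
     so they may not be fused into fma instructions here.  */
  return a * b + c * d;
}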
On Nov 14, 2005, at 19:59, Jim Wilson wrote:
Joel Sherrill <[EMAIL PROTECTED]> wrote:
s-auxdec.ads:286:13: alignment for "Aligned_Word" must be at least 4
Any ideas?
I'm guessing this is because ARM sets STRUCTURE_SIZE_BOUNDARY to 32
instead of 8, and this confuses the Ada front end.
Note
On Nov 15, 2005, at 18:11, Laurent GUERBY wrote:
What about moving s-auxdec from ada/Makefile.rtl GNATRTL_NONTASKING_OBJS
into EXTRA_GNATRTL_NONTASKING_OBJS so it can be set for VMS targets only
in ada/Makefile.in?
This is not ideal, because some people are migrating from DEC Ada
to GNAT o
On Nov 17, 2005, at 21:33, Dale Johannesen wrote:
When I arrived at Apple around 5 years ago, I was told of some recent
measurements that showed the assembler took around 5% of the time.
Don't know if that's still accurate. Of course the speed of the
assembler is also relevant, and our stubs a
> On Oct 1, 2015, at 11:34 AM, Alexander Monakov wrote:
>
> Can you expand on the "etc." a bit, i.e., may the compiler ...
>
> - move a call to a "const" function above a conditional branch,
>    causing a conditional throw to happen unconditionally?
No, calls may only be omitted, not moved.
On Jun 2, 2014, at 10:06 AM, Vincent Lefevre wrote:
> I've looked at
>
> https://gcc.gnu.org/wiki/FloatingPointMath
>
> and there may be some mistakes or missing info.
That’s quite possible. I created the page many years ago, based on my
understanding of GCC at that time.
>
> First, it is
On Jul 20, 2014, at 5:55 PM, Jakub Jelinek wrote:
> So, what versioning scheme have we actually agreed on, before I change it in
> wwwdocs? Is that
> 5.0.0 in ~ April 2015, 5.0.1 in ~ June-July 2015 and 5.1.0 in ~ April 2016,
> or
> 5.0 in ~ April 2015, 5.1 in ~ June-July 2015 and 6.0 in ~ Apri
On Jul 23, 2014, at 10:56 AM, Thomas Mertes wrote:
> One such feature is the detection of signed integer overflow. It is
> not hard to detect signed integer overflow with a generated C
> program, but the performance is certainly not optimal. Signed integer
> overflow is undefined behavior in C
On Sep 9, 2011, at 04:17, Jakub Jelinek wrote:
> I'd say they should be optimization barriers too (and at the tree level
> I think they work that way, being represented as function calls), so if
> they don't act as memory barriers in RTL, the *.md patterns should be
> fixed. The only exception s
On Sep 11, 2011, at 10:12, Andrew MacLeod wrote:
>> To be honest, I can't quite see the use of completely unordered
>> atomic operations, where we not even prohibit compiler optimizations.
>> It would seem if we guarantee that a variable will not be accessed
>> concurrently from any other thread,
On Sep 11, 2011, at 15:11, Jakub Jelinek wrote:
> On Sun, Sep 11, 2011 at 03:00:11PM -0400, Geert Bosch wrote:
>> Also, for relaxed order atomic operations we would only need a single
>> fence between two accesses (by a thread) to the same atomic object.
>
> I'm not aw
On Sep 12, 2011, at 03:02, Paolo Bonzini wrote:
> On 09/11/2011 09:00 PM, Geert Bosch wrote:
>> So, if I understand correctly, then operations using relaxed memory
>> order will still need fences, but indeed do not require any
>> optimization barrier. For memory_order_seq_
On Sep 12, 2011, at 19:19, Andrew MacLeod wrote:
> Let's simplify it slightly. The compiler can optimize away x=1 and x=3 as
> dead stores (even valid on atomics!), leaving us with 2 modification orders..
> 2,4 or 4,2
> and what you are getting at is you don't think we should ever see
> r1==
On Sep 13, 2011, at 08:08, Andrew MacLeod wrote:
> On 09/12/2011 09:52 PM, Geert Bosch wrote:
>> No that's false. Even on systems with nice memory models, such as x86 and
>> SPARC with a TSO model, you need a fence to avoid that a write-load of the
>> same location i
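A generic C11 illustration of why even TSO targets need fences for
sequentially consistent behaviour (a sketch of the standard store/load
reordering example, not a transcript of the argument here): without a
full barrier between the store and the following load, both threads can
read 0, because each store may still sit in a store buffer.

#include <stdatomic.h>

atomic_int x, y;

int
thread1 (void)   /* hypothetical thread body */
{
  atomic_store_explicit (&x, 1, memory_order_relaxed);
  atomic_thread_fence (memory_order_seq_cst);   /* e.g. mfence on x86 */
  return atomic_load_explicit (&y, memory_order_relaxed);
}

int
thread2 (void)
{
  atomic_store_explicit (&y, 1, memory_order_relaxed);
  atomic_thread_fence (memory_order_seq_cst);
  return atomic_load_explicit (&x, memory_order_relaxed);
}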
On Feb 5, 2012, at 11:08, James Courtier-Dutton wrote:
> But, r should be
> 5.26300791462049950360708478127784... or
> -1.020177392559086973318201985281...
> according to wolfram alpha and most arbitrary maths libs I tried.
>
> I need to do a bit more digging, but this might point to a bug in th
On Aug 12, 2009, at 10:32, Joel Sherrill wrote:
Hi,
GNAT doesn't build for arm-rtems on 4.4.x or
SVN (PR40775). I went back to 4.3.x since I
remembered it building.
I have run the ACATS on an ep7312 target and
get a number of generic test failures that
don't look RTEMS specific. Has anyone r
On Aug 21, 2009, at 18:40, Paul Smedley wrote:
Hi All,
I'm wanting to update the GNU ADA compiler for OS/2... I'm currently
building GCC 4.3.x and 4.4.x on OS/2 (C/C++/fortran) but for ADA
configure complains about not finding gnat. The problem is that the
only gnat compiled for OS/2 was year
If you pass -v to gnatmake, it will output the gcc invocations.
This should be sufficient to find the problem.
Basically, just go to the directory containing c35502i.adb, and
execute the gnatmake command as listed below, with -v added in.
If you only have the 35502i.ada file available, use "gnatc
On Sep 19, 2009, at 18:02, Steven Bosscher wrote:
* GDB test suite should pass with -O1
Apparently, the current GDB test suite can only work at -O0,
because code reorganization messes up the scripting.
-Geert
On Nov 23, 2009, at 10:17, Ian Bolton wrote:
> Regardless of the architecture, I can't see how an unbalanced tree would
> ever be a good thing. With a balanced tree, you can still choose to
> process it in either direction (broad versus deep) - whichever is better
> for your architecture - but,
On Feb 21, 2010, at 06:18, Steven Bosscher wrote:
> My point: gcc may fail to attract users (and/or may be losing users)
> when it tries to tailor to the needs of minorities.
>
> IMHO it would be much more reasonable to change the defaults to
> generate code that can run on, say, 95% of the compu
On Feb 21, 2010, at 09:58, Joseph S. Myers wrote:
> On Sun, 21 Feb 2010, Richard Guenther wrote:
>>> The biggest change we need to make for x86 is to enable SSE2,
>>> so we can get proper rounding behavior for float and double,
>>> as well as significant performance increases.
>>
>> I think Josep
On Feb 21, 2010, at 17:42, Erik Trulsson wrote:
> Newer compilers usually have better generic optimizations that are not
> CPU-dependent. Newer compilers also typically have improved support
> for new language-features (and new languages for that matter.)
This is exactly where CPU dependence com
On Feb 21, 2010, at 12:34, Joseph S. Myers wrote:
> Correct - I said API, not ABI. The API for C programs on x86 GNU/Linux
> involves FLT_EVAL_METHOD == 2, whereas that on x86 Darwin involves
> FLT_EVAL_METHOD == 0 and that on FreeBSD involves FLT_EVAL_METHOD == 2
> but with FPU rounding prec
On Feb 21, 2010, at 07:13, Richard Guenther wrote:
> The present discussion is about defaulting to at least 486 when not
> configured for i386-linux. That sounds entirely reasonable to me.
I fully agree with the "at least 486" part. However,
if we only change the default once every 20 years, it
On Feb 21, 2010, at 20:57, Joseph S. Myers wrote:
> I know some people have claimed (e.g. glibc bug 6981) that you can't
> conform to Annex F when you have excess precision, but this does not
> appear to be the view of WG14.
That may be the case, but I really wonder how much sense it
can make t
On Mar 29, 2010, at 13:19, Jeroen Van Der Bossche wrote:
> I've recently written a program where taking the average of 2 floating
> point numbers was a real bottleneck. I've looked into the assembly
> generated by gcc -O3 and apparently gcc treats multiplication and
> division by a hard-coded 2 li
On Mar 29, 2010, at 16:30, Tim Prince wrote:
> gcc used to have the ability to replace division by a power of 2 by an fscale
> instruction, for appropriate targets (maybe still does).
The problem (again) is that floating point multiplication is
just too damn fast. On x86, even though the latency
Hi Richard,
Great to see that you're addressing this issue. If I understand
correctly, for RTL all operations are always wrapping, right?
I have been considering adding "V" variants for operations that trap on
overflow. The main reason I have not (yet) pursued this is the daunting
task of te
On Mar 6, 2009, at 09:15, Joseph S. Myers wrote:
It looks like only alpha and pa presently have insn patterns such as
addvsi3 that would be used by the present -ftrapv code, but I expect
several other processors also have instructions that would help in
overflow-checking code. (For example, Pow
On Mar 6, 2009, at 04:11, Richard Guenther wrote:
I didn't spend too much time thinking about the trapping variants
(well, I believe it isn't that important ;)). But in general we would
have to expand the non-NV variants via the trapping expanders
if flag_trapv was true (so yeah, combining TUs
On Mar 6, 2009, at 12:22, Joseph S. Myers wrote:
If you add new trapping codes to GENERIC I'd recommend *not* making
fold() handle them. I don't think much folding is safe for the trapping
codes when you want to avoid either removing or introducing traps.
Either lower the codes in gimpli
On Apr 12, 2009, at 13:29, Oliver Kellogg wrote:
On Tue, 4 Mar 2003, Geert Bosch wrote:
[...]
Best would be to first post a design overview,
before doing a lot of work in order to prevent spending time
on implementing something that may turn out to have fundamental
problems.
I've d
On Apr 20, 2009, at 14:45, Oliver Kellogg wrote:
It would be best to first contemplate what output a single
invocation of the compiler, with multiple compilation units
as arguments, should produce.
For an invocation
  gnat1 a.adb b.adb c.adb
the files a.{s,ali} b.{s,ali} c.{s,ali} are produced
On May 31, 2010, at 14:25, Mark Mitchell wrote:
> That doesn't necessarily mean that we have to use lots of C++ features
> everywhere. We can use the C (almost) subset of C++ if we want to in
> some places. As an example, if the Fortran folks want to use C in the
> Fortran front-end, then -- exc
On Jun 1, 2010, at 17:41, DJ Delorie wrote:
> It assumes your editor can do block-reformatting while preserving the
> comment syntax. I've had too many // cases of Emacs guessing wrong //
> and putting // throughout a reformatted // block.
With Ada we have no choice, and only have -- comments.
On Jun 23, 2010, at 22:53, Tomás Touceda wrote:
> I'm starting to dig a little bit in what gcc does to protect the stack
> from overflows and attacks of that kind. I've found some docs and
> patches, but they aren't really up to date. I thought I could get some
> diffs for the parts that manage t
On Oct 8, 2010, at 18:18, Manuel López-Ibáñez wrote:
> It is possible to do it quite fast. Clang implements all warnings,
> including Wuninitialized, in the FE using fast analysis and they claim
> very low false positives.
> However, there are various reasons why it has not been attempted in GCC:
On Oct 31, 2010, at 15:33, Steven Bosscher wrote:
> The argument against disabling java as a default language always was
> that there should be at least one default language that requires
> non-call exceptions. I recall testing many patches without trouble if
> I did experimental builds with just
On Nov 1, 2010, at 00:30, Joern Rennecke wrote:
>> Feel free to enable Ada. Builds and tests faster than Java,
>> and is known to expose many more middle end bugs, including
>> ones that require non-call exceptions.
>
> But to get that coverage, testers will need to have gnat installed.
> Will th
On Nov 19, 2010, at 11:53, Eric Botcazou wrote:
>> Yes, if all the people who want only one set of libraries agree on what
>> that set shall be (or this can be selected with existing configure flags),
>> this is the simplest way.
>
> Yes, this can be selected at configure time with --with-cpu and
On Feb 15, 2006, at 11:44, John David Anglin wrote:
I missed this "new" define and will try it. Perhaps this should
take account of the situation when TARGET_SOFT_FLOAT is true. For
example,
When emulating all floating-point in software, we still don't want to
use 128-bit floats. The whole idea
On Feb 15, 2006, at 13:28, John David Anglin wrote:
Understood. My question was what should the define for
WIDEST_HARDWARE_FP_SIZE be when generating code for a target
with no hardware floating point support (e.g., when
TARGET_SOFT_FLOAT is true)?
Practically, I'd say it should be 64, as it'
On Mar 16, 2006, at 05:09, Robert Dewar wrote:
Not quite right. If you have an uninitialized variable, the value is
invalid and may be out of bounds, but this is a bounded error situation,
not an erroneous program. So the possible effects are definitely NOT
unbounded, and the use of such valu
On Mar 16, 2006, at 10:43, Richard Guenther wrote:
Uh - what do you expect here?? Does the Ada standard
require an out-of-range exception upon the first use of N?
In this case, the frontend needs to insert a proper check.
You cannot expect the middle-end to avoid the above
transformation, so thi
On Apr 3, 2006, at 09:34, Waldek Hebisch wrote:
2) Adjusting the gpc development model. In particular, gpc uses a
rather short feedback loop: new features are released (as alphas)
when they are ready. This is possible because gpc uses a stable
backend, so that users are exposed only to fron
On May 23, 2006, at 11:21, Jon Smirl wrote:
A new calling convention could push two return addresses for functions
that return their status in EAX. On EAX=0 you take the first return,
EAX != 0 you take the second.
This seems the same as passing an extra function pointer
argument and calling t
On May 25, 2006, at 13:21, Jon Smirl wrote:
jmp *4(%esp)
This is slightly faster than addl, ret.
The point is that this is only executed in the error case.
But my micro scale benchmarks are extremely influenced by changes in
branch prediction. I still wonder how this would perfor
On Feb 12, 2005, at 12:57, Nathan Sidwell wrote:
Well, it depends on the FE's language definition :) For C and C++ the
above is not a constant-expression as the language defines it. I can
see a couple of obvious ways to deal with this with an FE specific
constant expression evaluator,
1) during p
On Feb 12, 2005, at 14:57, Nathan Sidwell wrote:
I entirely agree. Unfortunately what we have now is not that --
fold is doing both optimization and (some) C & C++ semantic stuff.
Your proposal to have the tree folders check whether the program
obeys C/C++ language semantics seems fundamentally fl
On Feb 12, 2005, at 15:58, Richard Kenner wrote:
As several front-end people have suggested, calling fold whilst
constructing parse trees shouldn't be necessary (as shown by the
shining examples of g77 and GNAT).
I don't follow. GNAT certainly calls fold for every expression it
makes.
Jason,
Your patch has caused a lot of breakage for many platforms
and languages. It seems clear that it is far too intrusive
to apply at this stage.
Please revert your patch.
Thanks in advance,
-Geert
On Feb 18, 2005, at 12:14, Eric Botcazou wrote:
Regressions went from 16 to 143:
http://gcc.gnu.o
On Apr 3, 2013, at 11:27, Simon Baldwin wrote:
> Suppose you had a month in which to reorganise gcc so that it builds
> its 3-stage bootstrap and runtime libraries in some massively parallel
> fashion, without hardware or resource constraints(*). How might you
> approach this?
One of the main pr
On Apr 3, 2013, at 23:44, Joern Rennecke wrote:
> How does that work?
> The binaries have to get to all the machines of the clusters somehow.
> Does this assume you are using NFS or similar for your build directory?
> Won't the overhead of using that instead of local disk kill most of the
> p
On Apr 9, 2013, at 22:19, Segher Boessenkool wrote:
> Some numbers, 16-core 64-thread POWER7, c,c++,fortran bootstrap:
> -j6:  real  57m32.245s
> -j60: real  38m18.583s
Yes, these confirm mine. It doesn't make sense to look at more
parallelization before we address the serial bottlenecks.
T
On May 3, 2013, at 00:15, reed kotler wrote:
> There was some confusion on the llvm list because some tests were run on
> targets that did not support the naked attribute.
>
> I think we are thinking now that the return statement should not be emitted
> unless explicitly requested.
>
> It's
On Oct 29, 2013, at 05:41, Richard Biener wrote:
> For reference those
> (http://clang.llvm.org/docs/LanguageExtensions.html) look like
>
> if (__builtin_umul_overflow(x, y, &result))
>    return kErrorCodeHackers;
>
> which should be reasonably easy to support in GCC (if you factor out
> gen
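GCC later added equivalent built-ins (in GCC 5); a minimal usage sketch
of the generic __builtin_mul_overflow form, included only to make the
quoted proposal concrete:

#include <stdio.h>

int
main (void)
{
  unsigned long long result;
  /* 10^16 * 10^4 overflows 64 bits; the built-in reports this instead
     of silently wrapping.  */
  if (__builtin_mul_overflow (10000000000000000ULL, 10000ULL, &result))
    printf ("overflow\n");
  else
    printf ("%llu\n", result);
  return 0;
}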
On Nov 9, 2013, at 02:48, Ondřej Bílka wrote:
>> I've done the overflow checking in Gigi (Ada front end). Benchmarking
>> real world large Ada programs (where every integer operation is checked,
>> including array index computations etc.), I found the performance cost
>> *very* small (less than
On Feb 9, 2012, at 08:46, Andrew Haley wrote:
> On 02/09/2012 01:38 PM, Tim Prince wrote:
>> x87 built-ins should be a fair compromise between speed, code size, and
>> accuracy, for long double, on most CPUs. As Richard says, it's
>> certainly possible to do better in the context of SSE, but gcc
On Feb 9, 2012, at 10:28, Richard Guenther wrote:
> Yes, definitely! OTOH last time I added the toplevel libgcc-math directory
> and populated it with sources from glibc, RMS objected violently and I had
> to remove it again. So we at least need to find a different source of
> math routines to st
On Feb 9, 2012, at 12:55, Joseph S. Myers wrote:
> No, that's not the case. Rather, the point would be that both GCC's
> library and glibc's end up being based on the new GNU project (which might
> take some code from glibc and some from elsewhere - and quite possibly
> write some from scratc
On Feb 9, 2012, at 15:33, Joseph S. Myers wrote:
> For a few, yes, inline support (such as already exists for some functions
> on some targets) makes sense. But for some more complicated cases it
> seems plausible that LTO information in a library might be an appropriate
> way of inlining whil
On Feb 10, 2012, at 05:07, Richard Guenther wrote:
> On Thu, Feb 9, 2012 at 8:16 PM, Geert Bosch wrote:
>> I don't agree having such a libm is the ultimate goal. It could be
>> a first step along the way, addressing correctness issues. This
>> would be great progres
> On 2012-02-09 12:36:01 -0500, Geert Bosch wrote:
>> I think it would make sense to have a check list of properties, and
>> use configure-based tests to categorize implementations. These tests
>> would be added as we go along.
>>
>> Criteria:
>>
>>
On Feb 14, 2012, at 08:22, Vincent Lefevre wrote:
> Please do not use the term binary80, as it is confusing (and
> there is a difference between this format and the formats of
> the IEEE binary{k} class concerning the implicit bit).
Yes, I first wrote extended precision, though that really is
a ge
On Feb 14, 2012, at 11:44, Andrew Haley wrote:
> On 02/14/2012 04:41 PM, Geert Bosch wrote:
>> Right now we don't have a library either that conforms to C99
>
> Are you sure? As far as I know we do. We might not meet
> C99 Annex F, but that's not required.
>
>
On Feb 13, 2012, at 09:59, Vincent Lefevre wrote:
> On 2012-02-09 15:49:37 +, Andrew Haley wrote:
>> I'd start with INRIA's crlibm.
>
> A point I'd like to correct. GNU MPFR has mainly (> 95%) been
> developed by researchers and engineers paid by INRIA. But this
> is not the case of CRlibm.