Re: Inclusion in an official release of a new throw-like qualifier
Jason Merrill wrote: Sergio Giro wrote: I perceived that many people think that the throw qualifiers, as described by the standard, are not useful Yes. But that's not a reason to add a slightly different non-standard feature that would require people already using standard exception specifications to rewrite everything. That's just a non-starter. This is also the feature I'd like to see: a static checker for the existing throw specification feature. I've been an advocate for eh specs, and I believe they are usable in their present form. For reasons pointed out elsewhere, Java-esque static checking really isn't going to work for C++. The only way that Java even gets away with it is by letting the vast majority of exceptions--those derived from Error--slip through. That is, Java throw(something) is equivalent to C++ throw(something, Error), where Java's Error is probably close to C++'s std::runtime_error. (Unfortunately std::bad_alloc and similar don't derive from std::runtime_error or share a common parent.) The main problem with eh specs in their present form, besides poor support on some non-GCC compilers, is the lack of tools available to audit the specs. This is a challenge to implement, as it needs to use non-intrusive decorators and heuristics to allow the user to indicate cases where exceptions logically won't propagate, even though they 'theoretically could.' Complete static checking would end up being the equivalent of -Weffc++: completely useless. But I do think the compiler is the right place to do this sort of checking, as it needs to have available to it all of the same sorts of analysis as a compiler. Good luck, Sergio, if you want to work on this.
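[A minimal C++03 sketch of the problem described above -- not taken from the thread itself. The specification names std::logic_error, but an innocuous standard-library call can emit std::bad_alloc, which shares no common base with it, so a strict Java-style static checker would have to complain here.]

#include <stdexcept>
#include <vector>

// f() promises only std::logic_error, but push_back may throw
// std::bad_alloc, which does not derive from std::logic_error.  A strict
// Java-style checker would have to flag this call, even though many
// programs are content to let bad_alloc terminate them.
void f(std::vector<int> &v, int x) throw(std::logic_error)
{
    if (x < 0)
        throw std::logic_error("negative input");
    v.push_back(x);   // can throw std::bad_alloc at run time
}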
Re: RFC: GIMPLE tuples. Design and implementation proposal
Richard Henderson <[EMAIL PROTECTED]> writes: > On Tue, Apr 10, 2007 at 11:13:44AM -0700, Ian Lance Taylor wrote: > > The obvious way to make the proposed tuples position independent would > > be to use array offsets rather than pointers. > > I suggest instead, if we want something like this, that we make > the references be pc-relative. So something like If you go this way (and require special GC/debugger support) you could just as well xor next/prev together and save another field. Adding a xor is basically free and much cheaper than any cache miss from larger data structures. The only thing that wouldn't work is that when you have a pointer to an arbitrary element (without starting from the beginning/end first) you couldn't get previous or next. -Andi
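[To make the XOR trick concrete, a minimal sketch -- illustration only, not GCC code, and ignoring the GC/debugger support a real GIMPLE change would need. One pointer-sized field per node gives a doubly linked list, but walking it requires a pair of adjacent nodes, which is exactly the limitation Andi mentions.]

#include <cstdint>

// XOR-linked list sketch: the single `link` field stores prev ^ next, so a
// doubly linked list costs one pointer-sized field per node instead of two.
struct Node {
  uintptr_t link;   // (uintptr_t)prev ^ (uintptr_t)next
  int payload;      // stand-in for the real tuple contents
};

// Given two adjacent nodes, recover the node on the far side of `cur`.
static inline Node *xor_step(Node *prev, Node *cur) {
  return (Node *)(cur->link ^ (uintptr_t)prev);
}

// Forward walk from the head: the running state is a *pair* of nodes, which
// is why a pointer to an arbitrary element is not enough on its own.
static void walk(Node *head) {
  Node *prev = 0;
  for (Node *cur = head; cur != 0; ) {
    // ... visit cur->payload ...
    Node *next = xor_step(prev, cur);
    prev = cur;
    cur = next;
  }
}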
Re: RFC: GIMPLE tuples. Design and implementation proposal
On Tue, Apr 10, 2007 at 06:53:07PM -0700, Mark Mitchell wrote: > Self-relative, not PC-relative, right? Yes. I tend to use the terms interchangeably, since if you're generating object files, the relocation is still named "pc-relative". r~
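[A minimal sketch of a self-relative reference -- illustration only, not GCC code. The field stores the distance from its own address to the target, so the value survives relocating the whole block as a unit; a 32-bit offset also halves the field on 64-bit hosts at the cost of a +/-2GB reach.]

#include <cstdint>

// Self-relative reference sketch: `off` holds the distance from the field's
// own address to the target.  If the block containing both the field and the
// target moves together, the stored value is still correct, which is what
// makes the data structure position independent.
struct SelfRef {
  int32_t off;

  void set(const void *target) {
    off = (int32_t)((const char *)target - (const char *)this);
  }
  void *get() const {
    return (void *)((const char *)this + off);
  }
};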
Re: RFC: GIMPLE tuples. Design and implementation proposal
If you go this way (and require special GC/debugger support) you could just as well xor next/prev together and save another field. Adding a xor is basically free and much cheaper than any cache miss from larger data structures. The only thing that wouldn't work is that when you have a pointer to an arbitrary element (without starting from the beginning/end first) you couldn't get previous or next. You would pass around a BSI (basic-block/statement iterator) in this case. The BSI would hold the previous/next as necessary. Paolo
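[A sketch of the iterator Paolo describes; `Stmt` is a stand-in, not the real GIMPLE statement type. The iterator carries the two adjacent nodes needed to step either way over an XOR-linked chain, so passes never touch the encoding directly.]

#include <cstdint>

// Hypothetical statement type with an XOR link field (prev ^ next).
struct Stmt { uintptr_t link; };

// The iterator holds a pair of adjacent statements; stepping in either
// direction only needs the pair plus one xor.
struct StmtIterator {
  Stmt *prev;   // neighbour on the "before" side (0 at the start of the list)
  Stmt *cur;    // the statement the iterator currently designates

  void next() {                 // assumes cur != 0
    Stmt *n = (Stmt *)(cur->link ^ (uintptr_t)prev);
    prev = cur;
    cur = n;
  }
  void previous() {             // assumes prev != 0
    Stmt *p = (Stmt *)(prev->link ^ (uintptr_t)cur);
    cur = prev;
    prev = p;
  }
};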
Re: Inclusion in an official release of a new throw-like qualifier
Brendon's point is a good one: it avoids having to define special attributes in the code; the only difference is the set of command line flags you pass to the compiler. It does, however, mean that you can't provide function-level enable/disable of static checking, i.e. it will check all functions whose implementation is found in the given translation unit. Here we have a trade-off: either a new attribute is added, which allows function-level enable/disable of static checking (but requires adding a new attribute), or a flag is added to the compiler, which rules out function-level enabling. I think that finer control of enabledness is a must in order to avoid warnings from external headers... Maybe there is another way to switch between static/runtime checking (I have seen that pragmas have an even worse reputation than attributes, so I do not even mention them :D ). Using grep -v to eliminate warnings (as suggested for -Weffc++) does not sound very elegant to me... With respect to Aaron's point: This is a challenge to implement, as it needs to use non-intrusive decorators and heuristics to allow the user to indicate cases where exceptions logically won't propagate, even though they 'theoretically could.' Complete static checking would end up being the equivalent of -Weffc++: completely useless. OK, but let me say that I am trying to accomplish a much more humble task... At the beginning, the only information used will be the information provided by the programmer in the prototypes. Maybe someone can later save the programmer from writing unnecessary code. It seems to me that fully accomplishing what you are talking about needs an algorithm for the halting problem, and I don't plan to have such an algorithm until 2009 :D . All the best, Sergio
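[Purely hypothetical syntax to illustrate the granularity trade-off Sergio describes -- neither attribute below exists in GCC; they are placeholders only.]

// Hypothetical attributes, for illustration only -- they do not exist in GCC.
// With per-function markers, third-party headers can be left unchecked while
// the project's own code opts in; with a plain command-line flag, every
// function defined in the translation unit gets checked.
void parse_config(const char *path) __attribute__((check_throws));     // opt in
void third_party_glue()             __attribute__((no_check_throws));  // opt out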
Re: Constraint not satisfied - floating point insns.
Hi All, I had a single movsf insn that accepts all alternatives for the reload to work. (define_insn "movsf" [(set (match_operand:SF 0 "nonimmediate_operand" "=f,m,f,f,d,d") (match_operand:SF 1 "general_operand" "m,f,f,d,f,i")) ] But for alternative 3, I need another data register, so I used (define_insn "movsf" [(set (match_operand:SF 0 "nonimmediate_operand" "=f,m,f,f,d,d") (match_operand:SF 1 "general_operand" "m,f,f,d,f,i")) (clobber (match_scratch:SF 2 "=X,X,X,&d,X,X")) ] But I am getting: error: insn does not satisfy its constraints: Is there any other way to generate a temporary register other than using gen_reg_rtx in a define_expand and emitting the corresponding mov patterns? Regards, Rohit On 3/15/07, Jim Wilson <[EMAIL PROTECTED]> wrote: Rohit Arul Raj wrote: > (define_insn "movsf_store" > [(set (match_operand:SF 0 "memory_operand" "=m") > (match_operand:SF 1 "float_reg" "f"))] You must have a single movsf define_insn that accepts all alternatives so that reload will work. You can't have separate define_insns for movsf and movsf_store. -- Jim Wilson, GNU Tools Support, http://www.specifix.com
Re: Constraint not satisfied - floating point insns.
"Rohit Arul Raj" <[EMAIL PROTECTED]> writes: > I had a single movsf insn that accepts all alternatives for the reload to > work. > > (define_insn "movsf" > [(set (match_operand:SF 0 "nonimmediate_operand" "=f,m,f,f,d,d") >(match_operand:SF 1 "general_operand" "m,f,f,d,f,i")) >] > But for the alternative 3, i need a another data register. > so i used > (define_insn "movsf" > [(set (match_operand:SF 0 "nonimmediate_operand" "=f,m,f,f,d,d") >(match_operand:SF 1 "general_operand" "m,f,f,d,f,i")) > (clobber (match_scratch:SF 2 "=X,X,X,&d,X,X")) ] > > But i am getting: error: insn does not satisfy its constraints: > > Is there any other way to generate a temporary register other than > using gen_reg_RTX in define expand and emitting the corresponding mov > patterns? For this sort of irritating case, I recommend studying the wonder that is secondary reloads. That is, don't handle the case in movsf at all. Use a secondary reload instead. Ian
RE: Constraint not satisfied - floating point insns.
On 11 April 2007 16:48, Rohit Arul Raj wrote: > Hi All, > > I had a single movsf insn that accepts all alternatives for the reload to > work. > > (define_insn "movsf" > [(set (match_operand:SF 0 "nonimmediate_operand" "=f,m,f,f,d,d") >(match_operand:SF 1 "general_operand" "m,f,f,d,f,i")) >] > But for alternative 3, I need another data register, > so I used > (define_insn "movsf" > [(set (match_operand:SF 0 "nonimmediate_operand" "=f,m,f,f,d,d") >(match_operand:SF 1 "general_operand" "m,f,f,d,f,i")) > (clobber (match_scratch:SF 2 "=X,X,X,&d,X,X")) ] > > But I am getting: error: insn does not satisfy its constraints: movsf is special. In particular, it gets called after reload, when no new pseudos can be allocated. So the (clobber (match_scratch)) fails. Re-read the internals doco for 'movMM'; it goes into some depth about this. Basically, movMM mustn't fail and can't allocate a temporary reg. > Is there any other way to generate a temporary register other than > using gen_reg_rtx in a define_expand and emitting the corresponding mov > patterns? Yes. This is what secondary reloads are for. Grep 'SECONDARY_RELOAD' in the internals doc. cheers, DaveK -- Can't think of a witty .sigline today
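[A hedged sketch of one 2007-era shape of the secondary-reload route, written as a target-header fragment; FP_REGS, DATA_REGS and DATA_REG_P are placeholders for the port's real register classes and predicates, and the exact condition depends on which alternative actually needs the extra register.]

/* Sketch only, not drop-in code.  The macro tells reload that loading an SF
   value into an 'f'-class register from a 'd'-class register cannot be done
   directly and needs help from DATA_REGS; the port then supplies the actual
   copy sequence (including any scratch) in a reload_insf pattern instead of
   hanging a clobber off the movsf pattern itself.  */
#define SECONDARY_INPUT_RELOAD_CLASS(CLASS, MODE, X)        \
  (((CLASS) == FP_REGS && (MODE) == SFmode                  \
    && REG_P (X) && DATA_REG_P (REGNO (X)))                 \
   ? DATA_REGS : NO_REGS)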
RE: RFC: GIMPLE tuples. Design and implementation proposal
On 11 April 2007 12:53, Andi Kleen wrote: > Richard Henderson <[EMAIL PROTECTED]> writes: > >> On Tue, Apr 10, 2007 at 11:13:44AM -0700, Ian Lance Taylor wrote: >>> The obvious way to make the proposed tuples position independent would >>> be to use array offsets rather than pointers. >> >> I suggest instead, if we want something like this, that we make >> the references be pc-relative. So something like > > If you go this way (and require special GC/debugger support) you > could just as well xor next/prev together and save another field. > > Adding a xor is basically free and much cheaper than any cache miss > from larger data structures. Using a delta is even better than an XOR, because it remains constant when you relocate the data. > The only thing that wouldn't work is that when you have a pointer > to an arbitrary element (without starting from the beginning/end first) > you couldn't get previous or next. You need a pointer to two consecutive nodes to act as an iterator. However we already discussed the whole idea upthread, and the general feeling was that it's a whole bunch of tricky code for a small saving, so not worth doing in the first iteration. Maybe as a future refinement. See the earlier posts in this thread for the discussion. cheers, DaveK -- Can't think of a witty .sigline today
Re: RFC: GIMPLE tuples. Design and implementation proposal
On Wed, Apr 11, 2007 at 04:55:02PM +0100, Dave Korn wrote: > On 11 April 2007 12:53, Andi Kleen wrote: > > > Richard Henderson <[EMAIL PROTECTED]> writes: > > > >> On Tue, Apr 10, 2007 at 11:13:44AM -0700, Ian Lance Taylor wrote: > >>> The obvious way to make the proposed tuples position independent would > >>> be to use array offsets rather than pointers. > >> > >> I suggest instead, if we want something like this, that we make > >> the references be pc-relative. So something like > > > > If you go this way (and require special GC/debugger support) you > > could just as well xor next/prev together and save another field. > > > > Adding a xor is basically free and much cheaper than any cache miss > > from larger data structures. > > Using a delta is even better than an XOR, because it remains constant when > you relocate the data. You can xor deltas as well as pointers. The concepts are independent. A delta alone doesn't save anything on 32-bit -- and on 64-bit only when you accept < 2GB of data (which might be reasonable for a basic block, but probably not for a full function). An xor always halves the space needed for links. > > > The only thing that wouldn't work is that when you have a pointer > > to an arbitrary element (without starting from the beginning/end first) > > you couldn't get previous or next. > > You need a pointer to two consecutive nodes to act as an iterator. > > However we already discussed the whole idea upthread, and the general feeling > was that it's a whole bunch of tricky code for a small saving, so not worth Is it tricky? You just need some abstracted iterators and a special case in the GC. And some gdb macros. I wrote a couple of xor list implementations in the past and didn't find them particularly tricky, though that was without any GC. > doing in the first iteration. Maybe as a future refinement. See the earlier > posts in this thread for the discussion. When you need to go over all the code anyway it would make sense to do such changes in one go. Or at least move to abstracted iterators early: if you do that, then changing the actual data structure later is relatively easy. Both xor lists and deltas can be hidden with some care. -Andi
RE: RFC: GIMPLE tuples. Design and implementation proposal
On 11 April 2007 17:05, Andi Kleen wrote: >>> Adding a xor is basically free and much cheaper than any cache miss >>> from larger data structures. >> >> Using a delta is even better than an XOR, because it remains constant >> when you relocate the data. > > You can xor deltas as well as pointers. The concepts are independent. > > A delta alone doesn't save anything on 32-bit -- and on 64-bit only when you > accept < 2GB of data (which might be reasonable for a basic block, but > probably not for a full function). > > > An xor always halves the space needed for links. No, you've misunderstood. You can't "xor deltas" because there is only one of them. Go read the link that explains what a delta-linked list is. It's the *exact* same thing as what you're describing, only instead of storing the XOR of the next and prev pointers, you store the difference between them. You're swapping the xor operation for a subtract, nothing else. This has the advantage that the delta doesn't need recalculation if you move the whole memory block en masse to a new address, unlike an xor. >> However we already discussed the whole idea upthread, and the general >> feeling was that it's a whole bunch of tricky code for a small saving, so >> not worth Is it tricky? You just need some abstracted iterators and a special case in the GC. Yes, I'm aware of the elementary principles of software engineering, as is everyone else in the discussion. If you wouldn't mind *reading* the earlier discussion, rather than insisting on repeating it word for word... > I wrote a couple of xor list implementations in the past > and didn't find them particularly tricky, though that was without > any GC. I think the real point is that the tuples implementation is a big enough, risky enough and resource-consuming-enough change already without getting fancy about it. Iterative, incremental: we can change to delta pointers at a future time if we decide the memory savings outweigh any potential code obfuscation. > When you need to go over all the code anyway it would make > sense to do such changes in one go. Or at least move to abstracted > iterators early: if you do that, then changing the actual data structure > later is relatively easy. Both xor lists and deltas can be hidden with some > care. I don't see you offering to actually do any of the *work*, just repeating an idea that's already been suggested and that the people who actually /are/ going to do the work have politely declined as being nonessential extra effort for little reward. cheers, DaveK -- Can't think of a witty .sigline today
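[A minimal sketch of the delta-linked list Dave describes -- illustration only, not GCC code. Each node stores next - prev as a signed offset; stepping works the same way as in the XOR variant (you still need the neighbour), but if the whole block of nodes is moved en masse, every delta is unchanged because both neighbours moved by the same amount.]

#include <cstdint>

// Delta-linked list sketch: the single `delta` field stores next - prev.
struct DNode {
  intptr_t delta;   // (intptr_t)next - (intptr_t)prev
  int payload;
};

// Given two adjacent nodes, recover the node on the far side of `cur`:
// next = prev + (next - prev).
static inline DNode *delta_step(DNode *prev, DNode *cur) {
  return (DNode *)((intptr_t)prev + cur->delta);
}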
Re: RFC: GIMPLE tuples. Design and implementation proposal
Dave Korn wrote: Using a delta is even better than an XOR, because it remains constant when you relocate the data. Could someone explain why we are re-inventing VAX instructions here? [ OK, 1/2 :-) ] -- Toon Moene - e-mail: [EMAIL PROTECTED] - phone: +31 346 214290 Saturnushof 14, 3738 XG Maartensdijk, The Netherlands At home: http://moene.indiv.nluug.nl/~toon/ Who's working on GNU Fortran: http://gcc.gnu.org/ml/gcc/2007-01/msg00059.html
Recent dataflow branch SPEC2000 benchmarking
DF has made big progress, especially with Ken Zadeck's recent DCE/DSE improvements, so I think dataflow benchmarking will be interesting to people. Here is the comparison of the dataflow branch as of Apr 7 with the mainline at the last merge point (r123656), done by Daniel Berlin on Apr 7. Compilers for all platforms were configured with --enable-checking=release. All programs were compiled with -O2.

x86_64 is a 2.6GHz Core2 machine
ppc64 is a 2.5GHz G5 machine
ia64 is a 1.6GHz Itanium2 machine

SPECInt2000 compilation time (user time):

machine   mainline   branch    change
-------------------------------------
x86_64    141.7s     150.2s    +6.0%
ppc64     381.7s     415.4s    +8.8%
ia64      446.7s     496.3s    +13.3%

SPECFp2000 compilation time (user time):

machine   mainline   branch    change
-------------------------------------
x86_64    104.8s     117.7s    +12.3%
ppc64     312.3s     367.8s    +17.8%
ia64      377.6s     502.9s    +33.2%

SPECInt2000 score (without broken gcc for ppc64 and ia64):

machine   mainline   branch    change
-------------------------------------
x86_64    1935       1933      -0.1%
ppc64     654        653       -0.1%
ia64      843        840       -0.3%

SPECFp2000 score (without broken sixtrack for all platforms, plus applu for ia64 because it is broken on the df-branch):

machine   mainline   branch    change
-------------------------------------
x86_64    1843       1838      -0.3%
ppc64     661        653       -1.2%
ia64      569        566       -0.5%

SPECInt2000 average code size change (text segment):

machine   change
----------------
x86_64    +0.43%
ppc64     +0.75%
ia64      +0.11%

SPECFp2000 average code size change (text segment):

machine   change
----------------
x86_64    +0.12%
ppc64     +0.93%
ia64      +0.09%
Re: Inclusion in an official release of a new throw-like qualifier
Aaron W. LaFramboise wrote: > Jason Merrill wrote: >> Sergio Giro wrote: >>> I perceived that many people think that the throw qualifiers, as >>> described by the standard, are not useful >> >> Yes. But that's not a reason to add a slightly different non-standard >> feature that would require people already using standard exception >> specifications to rewrite everything. That's just a non-starter. > > This is also the feature I'd like to see: a static checker for the > existing throw specification feature. > That's what EDoc++ ( http://edoc.sourceforge.net/intro.html ) achieves, among a few other things like generating doxygen documentation. However, I agree with Sergio that having a feature that performs static analysis of exception propagation at compile time as part of GCC would be helpful. The problem with EDoc++, as pointed out by Sergio, is that it is not integrated into GCC, so maintenance is an issue as gcc evolves. > I've been an advocate for eh specs, and I believe they are usable in > their present form. For reasons pointed out elsewhere, Java-esque > static checking really isn't going to work for C++. The only way that > Java even gets away with it is by letting the vast majority of > exceptions--those derived from Error--slip through. That is, Java > throw(something) is equivalent to C++ throw(something, Error), where > Java's Error is probably close to C++'s std::runtime_error. > (Unfortunately std::bad_alloc and similar don't derive from > std::runtime_error or share a common parent.) > EDoc++ provides the ability to "ignore" certain exceptions. The exception types to ignore can be specified in a "suppressions" file. It's the user's decision as to whether they wish to do this or not. In my projects using EDoc++ I have opted for NOT including exception specifiers for all my functions but just using the generated documentation to determine what exceptions may be thrown. Otherwise, as you said, it will become necessary to include a number of different std::... exceptions in the throw specifiers for completeness or to ignore the fact that they may occur. Which brings up the question: is it allowable to let the program terminate if, say, a std::bad_alloc exception is thrown? I think the current manual audit process for using exceptions and knowing exactly what is going on is the biggest killer for using exceptions properly in C++. That was the initial purpose of creating this project. > The main problem with eh specs in their present form, besides poor > support on some non-GCC compilers, is the lack of tools available to > audit the specs. This is a challenge to implement, as it needs to use > non-intrusive decorators and heuristics to allow the user to indicate > cases where exceptions logically won't propagate, even though they > 'theoretically could.' Complete static checking would end up being the > equivalent of -Weffc++: completely useless. > Yes. I agree completely. My first version of EDoc++ did not include the concept of suppressions, and it just became infeasible to use. There was too much information and a lot of it just really was not important to the average developer. Practically every function that included a throw specifier would emit errors if it used anything from STL, as for example std::bad_alloc would be thrown or something similar.
By using an external suppressions file it is possible to indicate locations where exceptions will not logically propagate even though the compiler thinks that they will, as well as saying that std::bad_alloc among others are "runtime" exceptions equivalent to Java's and not necessary in the specs. Again, there are issues involved in maintaining an external suppressions file as opposed to some form of markup within the source code. Suppressions can also be used to say I am only interested in information for a certain set of functions, or to restrict the callgraph that may have been pessimistically expanded from virtual functions or function pointer calls, or to add function calls (which is currently necessary when using plugins), or really to modify the resulting data in any way imaginable. (The suppressions file is actually a python script that can manipulate the EDoc++ application's internal data structures.) > But I do think the compiler is the right place to do this sort of > checking, as it needs to have available to it all of the same sorts of > analysis as a compiler. Good luck, Sergio, if you want to work on this. > Yes. Trying to construct accurate callgraphs is very difficult without the compiler, especially with all the implicitly generated constructors and the like. Not to mention other issues that I have encountered, like static/inline functions with the same name but DIFFERENT implementations in different translation units (gee, that is poor coding style, but it is possible using the preprocessor), code compiled with -fno-exceptions linked with code that allows exceptions, same with C++ and C code intermixed, templates and vague linkage, differing throws() specifiers for a function's prototype in different translation units, and the list of complexities goes on...
adding dependence from prefetch to load
Hi, I have a MIPS-like architecture which has prefetch instructions. I'm writing an optimization pass that inserts prefetch instructions for all array reads. The catch is that I'm trying to do this even if the reads are not in a loop. I have two questions: 1. Is there any work out there that has tried to do this before? All I found in the latest gcc-svn was tree-ssa-loop-prefetch.c, but since my references are not in a loop, a lot of the things done in there will not apply to me. 2. Right now I am inserting a __builtin_prefetch(...) call immediately before the actual read, getting something like: D.1117_12 = &A[D.1101_14]; __builtin_prefetch (D.1117_12, 0, 1); D.1102_16 = A[D.1101_14]; However, if I enable the instruction scheduler pass, it doesn't realize there's a dependency between the prefetch and the load, and it actually moves the prefetch after the load, rendering it useless. How can I inform the scheduler of this dependence? My thinking is to also specify a latency for prefetch, so that the scheduler will hopefully place the prefetch somewhere earlier in the code to partially hide this latency. Do you see anything wrong with this approach? The prefetch instruction in the .md file is defined as: (define_insn "prefetch" [(prefetch (match_operand:QI 0 "address_operand" "p") (match_operand 1 "const_int_operand" "n") (match_operand 2 "const_int_operand" "n"))] "" { operands[1] = mips_prefetch_cookie (operands[1], operands[2]); return "pref\t%1,%a0"; } [(set_attr "type" "prefetch")]) Thanks, George
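[For reference, the source-level shape of what the pass inserts, using GCC's documented builtin __builtin_prefetch(addr, rw, locality) with rw=0 (read) and locality=1, mirroring the GIMPLE dump above. The function and parameter names are illustrative; whether the scheduler keeps the prefetch ahead of the load is exactly the open question in this post.]

// Prefetch A[i] for reading with low temporal locality, then perform the
// read the prefetch is meant to cover.
void touch(const int *A, int i, int &out)
{
    __builtin_prefetch (&A[i], 0, 1);
    out = A[i];
}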
Re: Inclusion in an official release of a new throw-like qualifier
Having not read the entire thread, I risk reiterating an idea that may have already been brought up, but I believe I've got a few thoughts that may be of value... and if somebody's already mentioned them, I hope they take this as a compliment and a vote in their favor. > Otherwise, as > you said, it will become necessary to include a number of different > std::... exceptions in the throw specifiers for completeness or to > ignore the fact that they may occur. Which brings up the question: is it > allowable to let the program terminate if, say, a std::bad_alloc exception > is thrown? > > I think the current manual audit process for using exceptions and > knowing exactly what is going on is the biggest killer for using > exceptions properly in C++. That was the initial purpose of creating > this project. I think it might be a good idea for the throw specifier to include all types derived from the specified types as acceptable exceptions, especially so for virtual methods (see the sketch at the end of this post). An implementation of a virtual method in a derived class may decide that the de facto standard exception (that the base stated it would throw) does not contain enough information to fully describe the error. The derived method could then be free to throw an exception of a type derived from the exception class that the base stated it would throw, without violating the throw specifiers that it inherits. Any user of the hierarchy in this example could still catch only the specified exception types and not be at risk of letting one slip by. Additionally, programmers using STL templates and letting their exceptions pass through could merely claim to throw std::exception (being the parent of all STL exceptions... I hope) and not be considered incomplete. > Yes. I agree completely. My first version of EDoc++ did not include the > concept of suppressions, and it just became infeasible to use. There was > too much information and a lot of it just really was not important to > the average developer. > > Practically every function that included a throw specifier would emit > errors if it used anything from STL, as for example std::bad_alloc would > be thrown or something similar. By using an external suppressions file > it is possible to indicate locations where exceptions will not logically > propagate even though the compiler thinks that they will, as well as > saying that std::bad_alloc among others are "runtime" exceptions > equivalent to Java's and not necessary in the specs. > > Again, there are issues involved in maintaining an external > suppressions file as opposed to some form of markup within the source code. > > Suppressions can also be used to say I am only interested in information > for a certain set of functions, or to restrict the callgraph that may > have been pessimistically expanded from virtual functions or function > pointer calls, or to add function calls (which is currently necessary > when using plugins), or really to modify the resulting data in any way > imaginable. (The suppressions file is actually a python script that can > manipulate the EDoc++ application's internal data structures.) Two ideas come to mind: 1. Offer a class attribute that indicates an exception type that is exempt from throw specification checking (__volatile_exception). This could be applied to exception classes like std::bad_alloc or other exceptions that just might fly from anywhere. 2. Offer a function attribute(s) to play the role of suppression specifiers (__suppresses and/or __catches... they do represent slightly different meanings).
Both of these ideas introduce a level of danger that should be respected by developers, but so do const_cast and reinterpret_cast. > code compiled with > -fno-exceptions linked with code that allows exceptions, same with C++ > and C code intermixed, templates and vague linkage, differing throws() > specifiers for a function's prototype in different translation units, and > the list of complexities goes on... Anybody who does that deserves what they get (anybody who hires them, however, might not). Perhaps the resulting throw specification could be emitted as discardable records of some sort, and the linker could then be enhanced to rub the offending programmer's nose in it (well... at least for the differing throws() scenario). Intermixed C & C++ is just a requirement of real life, and I haven't seen anything about function pointers yet... you just have to take the programmer's word on that one, right? I've recently been in a situation where I had to use reference-counted exception classes that could potentially be delivered to multiple receiving threads, and the IO completion thread that generated them had to bucket them with their respective IO buffers, but never get one thrown to it. Having such a feature would have saved me days of sifting through the hulking beast of an IO subsystem that this design was deployed into. In other words, I would really like to have a feature like this available. - Br
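[A small C++98/03 sketch of the derived-exception point raised earlier in this post; the names Parser, StrictParser and parse_error are illustrative only. Note that for the runtime check the language already behaves this way -- a throw specification naming a base class also admits exceptions of types derived from it -- so the open question is whether a static checker would honour the same rule.]

#include <stdexcept>
#include <string>

// A richer exception type derived from the one named in the base's spec.
struct parse_error : std::runtime_error {
  int line;
  parse_error(const std::string &what, int l)
    : std::runtime_error(what), line(l) {}
};

struct Parser {
  virtual void parse(const char *src) throw(std::runtime_error) = 0;
  virtual ~Parser() {}
};

struct StrictParser : Parser {
  // Keeps the inherited specification but throws a derived type; callers
  // catching std::runtime_error still see every exception it emits.
  virtual void parse(const char *src) throw(std::runtime_error) {
    if (!src)
      throw parse_error("null input", 0);
  }
};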
GCC mini-summit at Google during Gelato conference
A reminder. This will happen next week. Several gcc developers are presenting at the Gelato conference in San Jose this April. Google is inviting them and all other interested parties to a gcc mini-summit at Google's Mountain View campus. The mini-summit will be on Wednesday, April 18, in Google building 40, from 10am to 5pm. The goal is simply to give gcc developers a place and time to talk face to face about where gcc should go and how it should get there. If you are interested in attending, please let me know. You don't have to commit to being here all day. Probably the first item of the day will be to set a schedule of things to discuss. If you have specific areas you would like to cover, please let me know that as well. I hope it will be both interesting and fun. So far I've heard from 28 people who are interested in attending. I will send out complete directions and instructions in a few days to everybody who said they were interested. The short version is that it will be at 1600 Amphitheatre Parkway in Mountain View, CA. Ian
Re: Recent dataflow branch SPEC2000 benchmarking
On 4/12/07, Vladimir Makarov <[EMAIL PROTECTED]> wrote:

SPECFp2000 compilation time (user time):

machine   mainline   branch    change
-------------------------------------
x86_64    104.8s     117.7s    +12.3%
ppc64     312.3s     367.8s    +17.8%
ia64      377.6s     502.9s    +33.2%

Hi Vlad, Thanks for testing this. Do you also have per benchmark compilation times, perhaps? Gr. Steven