Re: GCC and Clang produce undefined references to functions with vague linkage
Quoting John McCall:

> On Jun 29, 2012, at 2:23 PM, Rafael Espíndola wrote:
>
>>> There's no "for a long time" here. The ABI does not allow us to emit
>>> these symbols with non-coalescing linkage. We're not going to break ABI
>>> just because people didn't consider a particular code pattern when they
>>> hacked in devirtualization through external v-tables.
>>
>> If we take "the ABI" to mean compatibility with released versions of the
>> compilers, then it *is* broken, as released compilers assume behavior
>> that is not guaranteed by the ABI (the document). It is not possible to
>> avoid all incompatibilities.
>
> By "breaking the ABI", I mean changing the required output of compilers
> that conform to it. It is understood that compilers will occasionally
> have bugs that cause them not to conform; as a somewhat trivial example,
> both gcc and clang have mis-mangled symbols in the past. Typically,
> compiler implementers choose to fix those bugs rather than trying to
> codify them by modifying the ABI.
>
> Again, the ABI clearly expects that every object file that references an
> inline function will define it, and do so with coalescing linkage. That
> is an invaluable invariant. Our proposed visibility-modifying
> optimization is hardly the only reasonable consumer of it. As with most
> compatibility problems, it would be better if no compiler had ever
> strayed from the One True Path, and yet it's happened. Given that the
> chance of actual incompatibility in practice is low — as I believe I've
> argued several times — I continue to believe that the proper response is
> to just fix the bug, rather than imposing a novel and permanent
> constraint on ourselves out of unmotivated worry.

Yes, this indeed looks like a bug (most probably mine) in the constant-folding code that now uses extern vtables. I will fix it. So we cannot take a comdat-linkage decl from an external vtable when we no longer have its body around, right?

Honza
GCC 4.5 branch is closed now
The GCC 4.5.4 release has been tagged and is being created right now. The 4.5 branch is thus now closed. As planned, we now have two actively maintained release branches, 4.6.x and 4.7.x.

Richard.
Re: [testsuite] don't use lto plugin if it doesn't work
On Jun 29, 2012, Mike Stump wrote:

> First, let's get to the heart of the matter. That is the behavior of
> the compiler.

That's a distraction in the context of a patch to improve a feature that's already present in the testsuite machinery, isn't it? I have no objection to discussing this other topic you want to bring forth, but for reasons already stated and restated I don't think it precludes the proposed patch or the improvements to testsuite result reporting it brings about.

> Do you think it works perfectly and needs no changing in this area

I think the compiler is working just fine. It is told at build time whether or not to expect a linker with plugin support at run time, and behaves accordingly.

Configure detects that based on the linker version, which is in line with various other binutils feature tests we have in place, so I don't see that the test *needs* changing. If it were to change in such a way that, in a "native cross" build scenario, it failed to detect plugin support that is actually available on the equivalent linker one would find on the configured host=target run-time platform, I'd be inclined to regard that as a regression and a bug.

> My take was, the compiler should not try and use the plugin that won't
> work, and that this should be a static config bit built up from the
> usual config time methods for the host system. Do you agree, if not
> why, and what is your take?

I agree with that. Indeed, it seems like a restatement of what I just wrote above, unless one thinks configure should assume the user lied in the given triplets. Because, really, we *are* lying to configure when we say we're building an i686-linux-gnu compiler natively when the build system is actually an x86_64-linux-gnu with some wrapper scripts that approximate i686-linux-gnu. If we tell configure we're building a compiler for an i686-linux-gnu system, configure should listen to us, rather than second-guess us. And if we fail to provide it with an environment that is sufficiently close to what we asked for, it's entirely our own fault, rather than configure's fault for not realizing we were cheating and compensating for our lies.

Now, of course, none of this goes against an ability to explicitly specify whether or not to build a plugin, or to expect it to work with the linker-for-target on the host. But I don't think we should change the current default detection for the sake of the i686-native-on-x86_64 scenario, for it really is the best we can do in such a native-but-not-quite scenario: we can't possibly test properties of the actual native linker if what's available at build time is some other linker.

What we *can* and *should* do, IMHO, is improve the test machinery, so that if we happen to test a toolchain built for i686 on a non-i686 system whose linker fails to load the i686 plugin, we don't waste time testing stuff the testsuite itself has already detected as nonfunctional, and we avoid the thousands of failures that would ensue.

Another thing we may want to do is document how to test GCC in such fake-native settings, so that people can refer to it, save duplicated effort, and avoid inconsistent results.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer
Re: [testsuite] don't use lto plugin if it doesn't work
On Mon, Jul 2, 2012 at 1:06 PM, Alexandre Oliva wrote:

> [...]
>
> What we *can* and *should* do, IMHO, is to improve the test machinery,
> so that if we happen to test a toolchain built for i686 on a non-i686
> system whose linker fails to load the i686 plugin, we don't waste time
> testing stuff the testsuite itself has already detected as
> nonfunctional, and we avoid the thousands of failures that would ensue.

If you consider what happens if we break the lto-plugin so that it fails loading or crashes the linker, then it's clear that we _do_ want to see this effect in the testsuite as FAILs. Merely making tests UNSUPPORTED in this case is not what we want ...

> Another thing we may want to do is document how to test GCC in such
> fake-native settings, so that people can refer to it, save duplicated
> effort, and avoid inconsistent results.

... which means that maybe we should not encourage such settings at all.

Richard.
Re: Allow use of ranges in copyright notices
On Jun 30, 2012, David Edelsohn wrote:

> IBM's policy specifies a comma, and not a dash range.

But this notation already means something else in our source tree.

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer
Re: Allow use of ranges in copyright notices
On 7/2/2012 8:35 AM, Alexandre Oliva wrote:
> On Jun 30, 2012, David Edelsohn wrote:
>> IBM's policy specifies a comma, and not a dash range.
>
> But this notation already means something else in our source tree.

I think using the dash is preferable, and it is a VERY widely used notation, used by all major software companies I deal with!
Re: Allow use of ranges in copyright notices
On Mon, 2 Jul 2012, Robert Dewar wrote:

> On 7/2/2012 8:35 AM, Alexandre Oliva wrote:
>> On Jun 30, 2012, David Edelsohn wrote:
>>> IBM's policy specifies a comma, and not a dash range.
>>
>> But this notation already means something else in our source tree.
>
> I think using the dash is preferable, and it is a VERY widely used
> notation, used by all major software companies I deal with!

And as a GNU project there isn't a choice between using the IBM convention and the GNU convention - only about which of the GNU options we use. The simplest is YEAR-2012 (for any value of YEAR that is 1987 or later), and so I am proposing we move to that (make this change to README to allow it, allow converting files when 2012 is added to the copyright years, as is now done in glibc, and allow a bulk conversion if anyone wishes to do one).

-- 
Joseph S. Myers
jos...@codesourcery.com
Re: Allow use of ranges in copyright notices
On Mon, Jul 2, 2012 at 10:17 AM, Joseph S. Myers wrote:

> And as a GNU project there isn't a choice between using the IBM
> convention and the GNU convention - only about which of the GNU options
> we use. The simplest is YEAR-2012 (for any value of YEAR that is 1987
> or later), and so I am proposing we move to that (make this change to
> README to allow it, allow converting files when 2012 is added to the
> copyright years, as is now done in glibc, and allow a bulk conversion
> if anyone wishes to do one).

Joseph,

You are misunderstanding the point of my message. I mentioned the comma convention for its worldwide legal precedence and acceptance, not because it is an IBM convention.

There was a similar discussion many years ago. The dash format is widely used, but the comma format had better legal clarity and definition in worldwide copyright litigation, at least many years ago.

- David
Re: Allow use of ranges in copyright notices
On Mon, 2 Jul 2012, David Edelsohn wrote:

> There was a similar discussion many years ago. The dash format is
> widely used, but the comma format had better legal clarity and
> definition in worldwide copyright litigation, at least many years ago.

Maybe questions about the meaning of the dash format are why the GNU instructions require a statement in a README file about the use of that notation.

-- 
Joseph S. Myers
jos...@codesourcery.com
GCC 4.5.4 Released
The GNU Compiler Collection version 4.5.4 has been released. GCC 4.5.4 is the last bug-fix release from the 4.5 branch, containing important fixes for regressions and serious bugs in GCC 4.5.3. This release is available from the FTP servers listed at: http://www.gnu.org/order/ftp.html Please do not contact me directly regarding questions or comments about this release. Instead, use the resources available from http://gcc.gnu.org. As always, a vast number of people contributed to this GCC release -- far too many to thank them individually!
Re: GCC and Clang produce undefined references to functions with vague linkage
On Jun 28, 2012, Rafael Espíndola wrote:

> Unfortunately, this found a bug in both gcc and clang (or in the
> itanium ABI, it is not very clear).

The testcase is not well-formed C++, for it violates the one-definition rule in that it *lacks* a definition for the virtual member function foo::~foo(). Does it make any difference if you add a definition?

-- 
Alexandre Oliva, freedom fighter    http://FSFLA.org/~lxoliva/
You must be the change you wish to see in the world. -- Gandhi
Be Free! -- http://FSFLA.org/   FSF Latin America board member
Free Software Evangelist      Red Hat Brazil Compiler Engineer
Re: GCC and Clang produce undefined references to functions with vague linkage
> not well-formed C++, for it violates the one-definition rule in that it
> *lacks* a definition for the virtual member function foo::~foo(). Does
> it make any difference if you add a definition?

Unfortunately no. Replacing the declaration with an inline definition produces a copy of it in undef.o, but we still get an undefined reference to ~bar:

nm undef.o | grep D0
U _ZN3barD0Ev
W _ZN3fooD0Ev

Cheers,
Rafael
Re: GCC and Clang produce undefined references to functions with vague linkage
> Yes, this indeed looks like a bug (most probably mine) in the constant
> folding code that now uses extern vtables. I will fix it. So we cannot
> take a comdat-linkage decl from an external vtable when we no longer
> have its body around, right?

That sounds like the fix John was describing, yes. You can either produce a copy of the comdat-linkage decl (the destructor in this case) or avoid using the extern vtable and just produce an undefined reference to the vtable itself.

> Honza

Thanks!
Rafael
Re: libstdc++ c++98 & c++11 ABI incompatibility
On Thu, 2012-06-14 at 15:14 +0200, Matthias Klose wrote:

> While PR53646 claims that c++98 and c++11 should be ABI
> compatible (modulo bugs), the addition of the _M_size member
> to std::_List_base::_List_impl makes libraries using
> std::list in headers incompatible

This is pretty nasty for LibreOffice (and no doubt others). We can, and often do, depend on rather a number of system C++ libraries, and at a very minimum, having no simple way to detect which C++ ABI we have to build against - 'old' vs. 'new' - is profoundly unpleasant.

Is there no chance of having a bug fix that is a reversion of the (unintended?) ABI breakage in this compiler series?

> And is there a way to tell which mode a shared object/an
> executable was built for, when just looking at the stripped
> or unstripped object?

I guess here we have a compile-time checking problem; we would need some more or less gross configure hack to try to detect which ABI is deployed; suggestions appreciated.

Many thanks for the (otherwise) excellent gcc :-)

ATB,

Michael.

-- 
michael.me...@suse.com  <><, Pseudo Engineer, itinerant idiot
Re: libstdc++ c++98 & c++11 ABI incompatibility
On 07/02/2012 10:26 AM, Michael Meeks wrote:

> Is there no chance of having a bug fix that is a reversion of the
> (unintended?) ABI breakage in this compiler series?

That's the direction I'd prefer to see (reversion until we're ready to make the wholesale ABI changes). Not sure what the libstdc++ maintainers are thinking right now.

Jeff
Re: libstdc++ c++98 & c++11 ABI incompatibility
On 2 July 2012 17:43, Jeff Law wrote:

> That's the direction I'd prefer to see (reversion until we're ready to
> make the wholesale ABI changes). Not sure what the libstdc++
> maintainers are thinking right now.

I'm wondering why the libstdc++ list was taken out of the CC list ;-)

I don't know what the others think, but rather than just reverting it I'd like to see inline namespaces used so that in C++11 mode std::list refers to (for example) std::__2011::list, which has the additional member. That wouldn't link to C++03's std::list.
Re: libstdc++ c++98 & c++11 ABI incompatibility
On Mon, Jul 2, 2012 at 7:00 PM, Jonathan Wakely wrote:

> I don't know what the others think, but rather than just reverting it

From a RM point of view, please go ahead and revert the unintended ABI breakage on all affected branches. Add an entry to the respective changes file to warn users about the incompatibility present on the branches.

> I'd like to see inline namespaces used so that in C++11 mode std::list
> refers to (for example) std::__2011::list, which has the additional
> member. That wouldn't link to C++03's std::list.

That means that C++03 std::list cannot interface with C++11 std::list even within the v6 ABI, right? That sounds not very much better than the broken ABI we have right now (unless you suggest that people who want the C++11 std::list would have to use std::__2011::list, and would otherwise get the C++03 std::list even with -std=c++11?).

Richard.
Re: libstdc++ c++98 & c++11 ABI incompatibility
On 2 July 2012 18:24, Richard Guenther wrote:

> That means that C++03 std::list cannot interface with C++11 std::list
> even within the v6 ABI, right?

Right.

> That sounds not very much better than the broken ABI we have right now
> (unless you suggest that people who want the C++11 std::list would have
> to use std::__2011::list, and would otherwise get the C++03 std::list
> even with -std=c++11?).

No, I mean that with -std=c++11 there would be no 'list' declared directly in namespace std. The name 'std::list' would refer to the 'list' in the inline namespace 'std::__2011', which would be mangled differently.
Re: libstdc++ c++98 & c++11 ABI incompatibility
Hi,

On 07/02/2012 07:24 PM, Richard Guenther wrote:

> From a RM point of view, please go ahead and revert the unintended ABI
> breakage on all affected branches. Add an entry to the respective
> changes file to warn users about the incompatibility present on the
> branches.

I cannot claim to have followed all the details of this discussion (nor to fully agree with quite a few statements I read in it ;) but since I added the _M_size member on the 4_7 branch (to fix 49561, and of course in order to provide C++11-conforming complexities for the various operations), I'm simply going to revert the change from the branch and mainline. Consider it done.

Then, it would be great if Jon could devise something more sophisticated, without throwing away the baby, so to speak ;)

I also want to mention (I don't think anybody did already in the thread) that at some point, with Jason too, we discussed the idea of adding to each binary a marker about the ABI version, which the linker would then use to warn or error out. This vague idea goes of course well beyond our specific needs for the C++98-conforming std::list vs the C++11-conforming version of it.

Thanks,
Paolo.
Re: [testsuite] don't use lto plugin if it doesn't work
On Jul 2, 2012, at 4:06 AM, Alexandre Oliva wrote:

> On Jun 29, 2012, Mike Stump wrote:
>> First, let's get to the heart of the matter. That is the behavior of
>> the compiler.
>
> That's a distraction in the context of a patch to improve a feature
> that's already present in the testsuite machinery, isn't it?

Without a compiler to test, there is little reason to have a testsuite. The behavior of the compiler is what drives the testsuite. The testsuite doesn't exist to test anything other than the behavior we would like to see from the compiler. By defining the behavior of the compiler we'd like to see, we define exactly what we want to test. If we can't agree on the compiler, then by definition we can't agree on the testsuite. So, first things first, we have to resolve the features in the compiler, so that we know in what direction we are traveling. If you disagree, you'd have to enlighten me as to what purpose the testsuite serves. I'm happy to listen.

> I have no objection to discussing this other topic you want to bring
> forth, but for reasons already stated and restated I don't think it
> precludes the proposed patch and the improvements to testsuite result
> reporting it brings about.

If the testsuite can paper over configuration bits that are wrong, and re-adjust the compiler, but the compiler can't, then you wind up with testsuite results that don't reflect the expected runtime behavior of the compiler. The testsuite tries to reproduce the environment that the compiler will see when the user uses it, so as to test the compiler as faithfully as it can, as the user will see it. It is an obligation of the user to provide the compiler, during build and test, with the environment in which it is to be used.

>> Do you think it works perfectly and needs no changing in this area
>
> I think the compiler is working just fine.

Ah, then I'd leave you and Jakub to sort out whether the linker should use the plugin by default when the plugin just isn't going to work... I think he preferred to fix the compiler to not enable the plugin by default in this case. This is why agreement on the behavior of the compiler is critical.

> It is told at build time whether or not to expect a linker with plugin
> support at run time, and behaves accordingly.
>
> Configure detects that based on the linker version, which is in line
> with various other binutils feature tests we have in place, so I don't
> see that the test *needs* changing.

Validating that the linker's 64-bitness matches the plugin's 64-bitness, in addition to validating the version number.

> If it were to change in such a way that, in a "native cross" build
> scenario, it failed to detect plugin support that is actually available
> on the equivalent linker one would find on the configured host=target
> run-time platform, I'd be inclined to regard that as a regression and a
> bug.

If I understand what you just said, I disagree. The environment provided to configure is the final environment; it is what is actually available; it is the one from which to make all decisions. If the user doesn't like that, then the user is free to more faithfully provide the environment.

>> My take was, the compiler should not try and use the plugin that won't
>> work, and that this should be a static config bit built up from the
>> usual config time methods for the host system. Do you agree, if not
>> why, and what is your take?
>
> I agree with that.

Odd. So, does the compiler enable the plug-in by default when it can know that the plug-in isn't going to work? I thought the entire point was that the compiler was using the plug-in and that plug-in wasn't working?

> Indeed, it seems like a restatement of what I just wrote above, unless
> one thinks configure should assume the user lied in the given triplets.
> Because, really, we *are* lying to configure when we say we're building
> an i686-linux-gnu compiler natively when the build system is actually
> an x86_64-linux-gnu with some wrapper scripts that approximate
> i686-linux-gnu. If we tell configure we're building a compiler for an
> i686-linux-gnu system, configure should listen to us, rather than
> second-guess us. And if we fail to provide it with an environment that
> is sufficiently close to what we asked for, it's entirely our own
> fault, rather than configure's fault for not realizing we were cheating
> and compensating for our lies.

Jakub said disable by default, and I'm just going along with that as a given. Since I agree with his position as well, I'd find it hard to argue against it. If you want other than that, you'd have to find support for that position. gcc builds for the environment provided. Sorry if you don't agree with that. The reason why we do this is, gcc takes as gospel what you say. If you say you have a 64-bit linker, then you have a 64-bit linker. If you say you have a 32-bit linker, then you have a 32-bit linker. When you say you have a 64-bit linker, it i
Re: libstdc++ c++98 & c++11 ABI incompatibility
On 07/02/2012 11:53 AM, Paolo Carlini wrote:

> I also want to mention (I don't think anybody did already in the
> thread) that at some point, with Jason too, we discussed the idea of
> adding to each binary a marker about the ABI version, which the linker
> would then use to warn or error out.

Yup. I've floated this idea as well. Some kind of note section with ABI information in it, plus some linker magic to detect and error/warn when mixing and matching bits with different ABIs.

Jeff
Malicious content being served
My attempt to access the www.netgull.com mirror was blocked by our web content filter today. They tell me that this site may be serving malicious content: > I did some research on www.netgull.com to see if I could get the filtering > vendor to change the categorization of this website. During the course of my > research I found that the Netgull site is most likely compromised and serving > malicious content. I am surprised it is not blocked by the Malware Domain > List. If you review the comments of this website on the Web of Trust's > website you will note that several people have complained that this site not > only served them content containing viruses but also attempts to mine data > through Phishing scams. I would urge you to consider another source if you > are attempting to download content because Netgull's site will most likely > end up serving you malware. I thought you should know. Larry Baker US Geological Survey 650-329-5608 ba...@usgs.gov