[cfe-users] Errors when trying to use libcxx/libcxxabi with memory sanitizer

2015-11-23 Thread Schlottke-Lakemper, Michael via cfe-users
Hi folks,

I'm trying to set up our cluster tool chain to support clang’s memory sanitizer 
for our multiphysics simulation program, but I can’t get it to work :-/

I started with a regularly compiled clang installation (with libcxx, libcxxabi, 
and libomp built in-tree). With this, I compiled all necessary third-party 
libraries with “-O1 -fsanitize=memory” (OpenMPI, FFTW, Parallel netCDF). Then, 
I compiled the libcxx/libcxxabi libraries with msan-support by checking out the 
llvm source and the libcxx/libcxxabi repos into the llvm/projects/ directory. I 
configured them with LLVM_USE_SANITIZER=Memory and put the msan-instrumented 
libraries in the LD_LIBRARY_PATH.
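For completeness, the libc++/libc++abi build step itself was roughly along these lines (build directory name and the exact ninja targets are approximate, the install prefix is the one from the error below):

mkdir libcxx-msan-build && cd libcxx-msan-build
cmake -GNinja ../llvm \
  -DCMAKE_BUILD_TYPE=Release \
  -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ \
  -DLLVM_USE_SANITIZER=Memory \
  -DCMAKE_INSTALL_PREFIX=/pds/opt/libcxx-msan-20151121-r253770
ninja cxx cxxabi
# then copy lib/libc++*.so* and lib/libc++abi*.so* into the install prefix above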

Finally, I tried to compile our tool, ZFS, with the memory sanitizer enabled 
and linked against the msan-compiled third-party libraries as well as the 
msan-instrumented libcxx/libcxxabi libraries (by putting them in the 
LD_LIBRARY_PATH). However, here I failed: either at configure time or at 
compile time (after doing some LD_LIBRARY_PATH trickery), clang exits with the 
following error:

/pds/opt/llvm-20151121-r253770/bin/clang++: symbol lookup error: 
/pds/opt/libcxx-msan-20151121-r253770/lib/libc++abi.so.1: undefined symbol: 
__msan_va_arg_overflow_size_tls

Any suggestions as to what I am doing wrong? Should I put the msan-instrumented 
libcxx in the LD_LIBRARY_PATH after compilation only?
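In case it helps with diagnosis, these are the checks I can run here (just standard ldd/nm, using the paths from the error above):

# which libc++abi does the (uninstrumented) clang++ binary resolve at runtime?
ldd /pds/opt/llvm-20151121-r253770/bin/clang++ | grep 'c++abi'
# the instrumented library has undefined __msan_* symbols that only an
# MSan-built executable can provide:
nm -D /pds/opt/libcxx-msan-20151121-r253770/lib/libc++abi.so.1 | grep __msan_ | head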

Thanks a lot in advance

Michael


--
Michael Schlottke-Lakemper

Chair of Fluid Mechanics and Institute of Aerodynamics
RWTH Aachen University
Wüllnerstraße 5a
52062 Aachen
Germany

Phone: +49 (241) 80 95188
Fax: +49 (241) 80 92257
Mail: m.schlottke-lakem...@aia.rwth-aachen.de
Web: http://www.aia.rwth-aachen.de

___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


[cfe-users] Segmentation fault with memory sanitizer and OpenMPI's mpirun

2015-11-23 Thread Schlottke-Lakemper, Michael via cfe-users
Hi folks,

When running “mpirun” of an msan-instrumented installation of OpenMPI, I get 
the following error:

$> mpirun -n 1 hostname
[aia308:48324] *** Process received signal ***
[aia308:48324] Signal: Segmentation fault (11)
[aia308:48324] Signal code: Address not mapped (1)
[aia308:48324] Failing at address: (nil)
[aia308:48324] [ 0] 
/pds/opt/openmpi-1.8.7-clang-msan/lib64/libopen-pal.so.6(+0x123ca1)[0x7f9e21c90ca1]
[aia308:48324] [ 1] mpirun[0x42b602]
[aia308:48324] [ 2] /lib64/libpthread.so.0(+0xf890)[0x7f9e21250890]
[aia308:48324] *** End of error message ***
Segmentation fault

Running it through gdb and printing the stacktrace, I get the following 
additional information:
$> gdb -ex r --args mpirun -n 1 hostname
#0  0x in ?? ()
#1  0x0042bd5b in __interceptor_openpty () at 
/pds/opt/install/llvm/llvm-20151121-r253770-src/projects/compiler-rt/lib/msan/msan_interceptors.cc:1355
#2  0x77705a7c in opal_openpty (amaster=0x7fffacf8, 
aslave=0x7fffacfc, name=0x0, termp=0x0, winp=0x0) at 
../../../openmpi-1.8.7/opal/util/opal_pty.c:116
#3  0x77b31e9a in orte_iof_base_setup_prefork (opts=0x7ffface8) at 
../../../../openmpi-1.8.7/orte/mca/iof/base/iof_base_setup.c:89
#4  0x7fffefea465a in odls_default_fork_local_proc (context=0x7240bb80, 
child=0x7240b880, environ_copy=0x7341ec00, jobdat=0x72209d80) at 
../../../../../openmpi-1.8.7/orte/mca/odls/default/odls_default_module.c:860
#5  0x77b3cfb8 in orte_odls_base_default_launch_local (fd=-1, sd=4, 
cbdata=0x7062da80) at 
../../../../openmpi-1.8.7/orte/mca/odls/base/odls_base_default_fns.c:1544
#6  0x777459d7 in event_process_active_single_queue 
(base=0x72a0fc80, activeq=0x7100bdc0) at 
../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1367
#7  0x7773bb92 in event_process_active (base=0x72a0fc80) at 
../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1437
#8  0x77738fd7 in opal_libevent2021_event_base_loop 
(base=0x72a0fc80, flags=1) at 
../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1647
#9  0x0048f651 in orterun (argc=4, argv=0x7fffcea8) at 
../../../../openmpi-1.8.7/orte/tools/orterun/orterun.c:1133
#10 0x0048b20c in main (argc=4, argv=0x7fffcea8) at 
../../../../openmpi-1.8.7/orte/tools/orterun/main.c:13

I suspect that has something to do with using some non-msan-instrumented system 
libraries, but how can I find out which library is the problem and what to do 
to fix it? Any ideas?
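So far the only check I could come up with is to look for __msan_* symbols in each library the binary loads (sketch; it assumes mpirun lives under the install prefix and that every instrumented library references __msan_* symbols):

for lib in $(ldd /pds/opt/openmpi-1.8.7-clang-msan/bin/mpirun | awk '{print $3}' | grep '^/'); do
  nm -D "$lib" 2>/dev/null | grep -q '__msan_' || echo "not instrumented: $lib"
done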

Regards,

Michael

P.S.: I compiled OpenMPI with the following configure command (the wildcard 
blacklist was necessary because OpenMPI just has too many issues with msan…):

printf "fun:*\n" > blacklist.txt && \
CC=clang CXX=clang++ \
CFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer 
-fsanitize-memory-track-origins -fsanitize-blacklist=`pwd`/blacklist.txt" \
CXXFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer 
-fsanitize-memory-track-origins -fsanitize-blacklist=`pwd`/blacklist.txt" \
../openmpi-1.8.7/configure --prefix=/pds/opt/openmpi-1.8.7-clang-msan 
--disable-mpi-fortran
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


[cfe-users] UBSan: very long compile times for clang++ with -fsanitize=undefined, unsigned-integer-overflow

2015-11-23 Thread Schlottke-Lakemper, Michael via cfe-users
Hi folks,

When compiling our simulation tool with the undefined behavior sanitizer, I 
noticed that some files take much longer to compile - I am talking about a 
5x-250x increase:

file1 (zfsdgblock_inst_3d_acousticperturb.cpp): 8.5s vs 41.3s (4.8x)
file2 (zfsfvparticle.cpp): 9.5s vs 114.8s (12.1x)
file3 (zfscartesiangrid_inst_fv.cpp): 18.4s vs 100.8s (5.4x)
file4 (zfsfvbndrycnd3d.cpp): 19.6s vs [stopped after 1.5 hours of compilation]

For reference, I’ve added the full compilation command I used below. In the 
UBSan invocation, the only change is the addition of 
“-fsanitize=undefined,unsigned-integer-overflow”.

Even though the files in question are relatively large (5k-20k lines), I am 
wondering if this is expected. Is there anything I can do to significantly 
speed up compilation while still getting reasonable runtime performance? I 
found out that e.g. using -O0 instead of -O3 significantly reduces the compile 
time for file4 with ubsan to 17.1s, while -O1 already takes > 15 mins.
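(For reference, the timings above are plain wall-clock measurements of single-file compiles, essentially like this; include paths and warning flags are omitted here, the full command is below:)

time /pds/opt/llvm/bin/clang++ -O3 -std=c++11 -stdlib=libc++ -c src/zfsfvparticle.cpp -o /dev/null
time /pds/opt/llvm/bin/clang++ -O3 -std=c++11 -stdlib=libc++ \
  -fsanitize=undefined,unsigned-integer-overflow -c src/zfsfvparticle.cpp -o /dev/null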

Regards,

Michael

P.S.: Here is the compilation command I used for file2 (the other files were compiled
using the exact same flags):

/pds/opt/llvm/bin/clang++ -O3 -DNDEBUG -fvectorize -fslp-vectorize 
-DCOMPILER_ATTRIBUTES -DUSE_RESTRICT -march=native -mtune=native -std=c++11 
-stdlib=libc++ -Wall -Wextra -pedantic -Wshadow -Wfloat-equal -Wcast-align 
-Wfloat-equal -Wdisabled-optimization -Wformat=2 -Winvalid-pch -Winit-self 
-Wmissing-include-dirs -Wredundant-decls -Wpacked -Wpointer-arith 
-Wstack-protector -Wswitch-default -Wwrite-strings -Wno-type-safety -Werror 
-Wunused -Wno-infinite-recursion -Isrc -isystem /pds/opt/fftw/include -isystem 
/pds/opt/parallel-netcdf/include -isystem /pds/opt/openmpi/include -MMD -MT 
src/CMakeFiles/zfs.dir/zfsfvparticle.cpp.o -MF 
src/CMakeFiles/zfs.dir/zfsfvparticle.cpp.o.d -o 
src/CMakeFiles/zfs.dir/zfsfvparticle.cpp.o -c src/zfsfvparticle.cpp

"-DCOMPILER_ATTRIBUTES -DUSE_RESTRICT” just enables compiler attributes and the 
use of __restrict in the preprocessor stage.
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Segmentation fault with memory sanitizer and OpenMPI's mpirun

2015-11-23 Thread Evgenii Stepanov via cfe-users
This is caused by missing -lutil.
FTR, http://llvm.org/viewvc/llvm-project?rev=245619&view=rev

On Mon, Nov 23, 2015 at 7:33 AM, Schlottke-Lakemper, Michael via
cfe-users  wrote:
> Hi folks,
>
> When running “mpirun” of an msan-instrumented installation of OpenMPI, I get 
> the following error:
>
> $> mpirun -n 1 hostname
> [aia308:48324] *** Process received signal ***
> [aia308:48324] Signal: Segmentation fault (11)
> [aia308:48324] Signal code: Address not mapped (1)
> [aia308:48324] Failing at address: (nil)
> [aia308:48324] [ 0] 
> /pds/opt/openmpi-1.8.7-clang-msan/lib64/libopen-pal.so.6(+0x123ca1)[0x7f9e21c90ca1]
> [aia308:48324] [ 1] mpirun[0x42b602]
> [aia308:48324] [ 2] /lib64/libpthread.so.0(+0xf890)[0x7f9e21250890]
> [aia308:48324] *** End of error message ***
> Segmentation fault
>
> Running it through gdb and printing the stacktrace, I get the following 
> additional information:
> $> gdb -ex r --args mpirun -n 1 hostname
> #0  0x in ?? ()
> #1  0x0042bd5b in __interceptor_openpty () at 
> /pds/opt/install/llvm/llvm-20151121-r253770-src/projects/compiler-rt/lib/msan/msan_interceptors.cc:1355
> #2  0x77705a7c in opal_openpty (amaster=0x7fffacf8, 
> aslave=0x7fffacfc, name=0x0, termp=0x0, winp=0x0) at 
> ../../../openmpi-1.8.7/opal/util/opal_pty.c:116
> #3  0x77b31e9a in orte_iof_base_setup_prefork (opts=0x7ffface8) 
> at ../../../../openmpi-1.8.7/orte/mca/iof/base/iof_base_setup.c:89
> #4  0x7fffefea465a in odls_default_fork_local_proc 
> (context=0x7240bb80, child=0x7240b880, environ_copy=0x7341ec00, 
> jobdat=0x72209d80) at 
> ../../../../../openmpi-1.8.7/orte/mca/odls/default/odls_default_module.c:860
> #5  0x77b3cfb8 in orte_odls_base_default_launch_local (fd=-1, sd=4, 
> cbdata=0x7062da80) at 
> ../../../../openmpi-1.8.7/orte/mca/odls/base/odls_base_default_fns.c:1544
> #6  0x777459d7 in event_process_active_single_queue 
> (base=0x72a0fc80, activeq=0x7100bdc0) at 
> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1367
> #7  0x7773bb92 in event_process_active (base=0x72a0fc80) at 
> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1437
> #8  0x77738fd7 in opal_libevent2021_event_base_loop 
> (base=0x72a0fc80, flags=1) at 
> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1647
> #9  0x0048f651 in orterun (argc=4, argv=0x7fffcea8) at 
> ../../../../openmpi-1.8.7/orte/tools/orterun/orterun.c:1133
> #10 0x0048b20c in main (argc=4, argv=0x7fffcea8) at 
> ../../../../openmpi-1.8.7/orte/tools/orterun/main.c:13
>
> I suspect that has something to do with using some non-msan-instrumented 
> system libraries, but how can I find out which library is the problem and 
> what to do to fix it? Any ideas?
>
> Regards,
>
> Michael
>
> P.S.: I compiled OpenMPI with the following configure command (the wildcard 
> blacklist was necessary because OpenMPI just has too many issues with msan…):
>
> printf "fun:*\n" > blacklist.txt && \
> CC=clang CXX=clang++ \
> CFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer 
> -fsanitize-memory-track-origins -fsanitize-blacklist=`pwd`/blacklist.txt" \
> CXXFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer 
> -fsanitize-memory-track-origins -fsanitize-blacklist=`pwd`/blacklist.txt" \
> ../openmpi-1.8.7/configure --prefix=/pds/opt/openmpi-1.8.7-clang-msan 
> --disable-mpi-fortran
> ___
> cfe-users mailing list
> cfe-users@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Errors when trying to use libcxx/libcxxabi with memory sanitizer

2015-11-23 Thread Evgenii Stepanov via cfe-users
On Mon, Nov 23, 2015 at 12:59 AM, Schlottke-Lakemper, Michael via
cfe-users  wrote:
> Hi folks,
>
> I'm trying to set up our cluster tool chain to support clang’s memory
> sanitizer for our multiphysics simulation program, but I can’t get it to
> work :-/
>
> I started with a regularly compiled clang installation (with libcxx,
> libcxxabi, and libomp built in-tree). With this, I compiled all necessary
> third-party libraries with “-O1 -fsanitize=memory” (OpenMPI, FFTW, Parallel
> netCDF). Then, I compiled the libcxx/libcxxabi libraries with msan-support
> by checking out the llvm source and the libcxx/libcxxabi repos into the
> llvm/projects/ directory. I configured them with LLVM_USE_SANITIZER=Memory
> and put the msan-instrumented libraries in the LD_LIBRARY_PATH.
>
> Finally, I tried to compile our tool, ZFS, with the memory sanitizer enabled
> and linked against the msan-compiled third-party libraries as well as the
> msan-instrumented libcxx/libcxxabi libraries (by putting them in the
> LD_LIBRARY_PATH). However, here I failed: either at configure time or at
> compile time (after doing some LD_LIBRARY_PATH trickery), clang exits with
> the following error:
>
> /pds/opt/llvm-20151121-r253770/bin/clang++: symbol lookup error:
> /pds/opt/libcxx-msan-20151121-r253770/lib/libc++abi.so.1: undefined symbol:
> __msan_va_arg_overflow_size_tls
>
> Any suggestions as to what I am doing wrong? Should I put the
> msan-instrumented libcxx in the LD_LIBRARY_PATH after compilation only?

Yes, probably. In this case your compiler, which is not built with
MSan, picked up an instrumented libc++abi.
Sometimes it is convenient to set RPATH on all msan
libraries/executables and avoid LD_LIBRARY_PATH.
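E.g. something like this at link time (sketch; the lib dir is the one from your error message, source and output names are placeholders):

clang++ -fsanitize=memory -fno-omit-frame-pointer -stdlib=libc++ \
  -L/pds/opt/libcxx-msan-20151121-r253770/lib \
  -Wl,-rpath,/pds/opt/libcxx-msan-20151121-r253770/lib \
  zfs_main.cpp -o zfs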


>
> Thanks a lot in advance
>
> Michael
>
>
> --
> Michael Schlottke-Lakemper
>
> Chair of Fluid Mechanics and Institute of Aerodynamics
> RWTH Aachen University
> Wüllnerstraße 5a
> 52062 Aachen
> Germany
>
> Phone: +49 (241) 80 95188
> Fax: +49 (241) 80 92257
> Mail: m.schlottke-lakem...@aia.rwth-aachen.de
> Web: http://www.aia.rwth-aachen.de
>
>
> ___
> cfe-users mailing list
> cfe-users@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users
>
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Segmentation fault with memory sanitizer and OpenMPI's mpirun

2015-11-23 Thread Schlottke-Lakemper, Michael via cfe-users
Hi Evgenii,

Just to clarify: you mean I should re-compile OpenMPI and add “-lutil” to 
LDFLAGS at configure time?

Yours

Michael

> On 23 Nov 2015, at 18:07 , Evgenii Stepanov  wrote:
> 
> This is caused by missing -lutil.
> FTR, http://llvm.org/viewvc/llvm-project?rev=245619&view=rev
> 
> On Mon, Nov 23, 2015 at 7:33 AM, Schlottke-Lakemper, Michael via
> cfe-users  wrote:
>> Hi folks,
>> 
>> When running “mpirun” of an msan-instrumented installation of OpenMPI, I get 
>> the following error:
>> 
>> $> mpirun -n 1 hostname
>> [aia308:48324] *** Process received signal ***
>> [aia308:48324] Signal: Segmentation fault (11)
>> [aia308:48324] Signal code: Address not mapped (1)
>> [aia308:48324] Failing at address: (nil)
>> [aia308:48324] [ 0] 
>> /pds/opt/openmpi-1.8.7-clang-msan/lib64/libopen-pal.so.6(+0x123ca1)[0x7f9e21c90ca1]
>> [aia308:48324] [ 1] mpirun[0x42b602]
>> [aia308:48324] [ 2] /lib64/libpthread.so.0(+0xf890)[0x7f9e21250890]
>> [aia308:48324] *** End of error message ***
>> Segmentation fault
>> 
>> Running it through gdb and printing the stacktrace, I get the following 
>> additional information:
>> $> gdb -ex r --args mpirun -n 1 hostname
>> #0  0x in ?? ()
>> #1  0x0042bd5b in __interceptor_openpty () at 
>> /pds/opt/install/llvm/llvm-20151121-r253770-src/projects/compiler-rt/lib/msan/msan_interceptors.cc:1355
>> #2  0x77705a7c in opal_openpty (amaster=0x7fffacf8, 
>> aslave=0x7fffacfc, name=0x0, termp=0x0, winp=0x0) at 
>> ../../../openmpi-1.8.7/opal/util/opal_pty.c:116
>> #3  0x77b31e9a in orte_iof_base_setup_prefork (opts=0x7ffface8) 
>> at ../../../../openmpi-1.8.7/orte/mca/iof/base/iof_base_setup.c:89
>> #4  0x7fffefea465a in odls_default_fork_local_proc 
>> (context=0x7240bb80, child=0x7240b880, environ_copy=0x7341ec00, 
>> jobdat=0x72209d80) at 
>> ../../../../../openmpi-1.8.7/orte/mca/odls/default/odls_default_module.c:860
>> #5  0x77b3cfb8 in orte_odls_base_default_launch_local (fd=-1, sd=4, 
>> cbdata=0x7062da80) at 
>> ../../../../openmpi-1.8.7/orte/mca/odls/base/odls_base_default_fns.c:1544
>> #6  0x777459d7 in event_process_active_single_queue 
>> (base=0x72a0fc80, activeq=0x7100bdc0) at 
>> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1367
>> #7  0x7773bb92 in event_process_active (base=0x72a0fc80) at 
>> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1437
>> #8  0x77738fd7 in opal_libevent2021_event_base_loop 
>> (base=0x72a0fc80, flags=1) at 
>> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1647
>> #9  0x0048f651 in orterun (argc=4, argv=0x7fffcea8) at 
>> ../../../../openmpi-1.8.7/orte/tools/orterun/orterun.c:1133
>> #10 0x0048b20c in main (argc=4, argv=0x7fffcea8) at 
>> ../../../../openmpi-1.8.7/orte/tools/orterun/main.c:13
>> 
>> I suspect that has something to do with using some non-msan-instrumented 
>> system libraries, but how can I find out which library is the problem and 
>> what to do to fix it? Any ideas?
>> 
>> Regards,
>> 
>> Michael
>> 
>> P.S.: I compiled OpenMPI with the following configure command (the wildcard 
>> blacklist was necessary because OpenMPI just has too many issues with msan…):
>> 
>> printf "fun:*\n" > blacklist.txt && \
>> CC=clang CXX=clang++ \
>> CFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer 
>> -fsanitize-memory-track-origins -fsanitize-blacklist=`pwd`/blacklist.txt" \
>> CXXFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer 
>> -fsanitize-memory-track-origins -fsanitize-blacklist=`pwd`/blacklist.txt" \
>> ../openmpi-1.8.7/configure --prefix=/pds/opt/openmpi-1.8.7-clang-msan 
>> --disable-mpi-fortran
>> ___
>> cfe-users mailing list
>> cfe-users@lists.llvm.org
>> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users

___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Errors when trying to use libcxx/libcxxabi with memory sanitizer

2015-11-23 Thread Schlottke-Lakemper, Michael via cfe-users

> On 23 Nov 2015, at 18:11 , Evgenii Stepanov  wrote:
> 
> On Mon, Nov 23, 2015 at 12:59 AM, Schlottke-Lakemper, Michael via
> cfe-users  wrote:
>> Hi folks,
>> 
>> I'm trying to set up our cluster tool chain to support clang’s memory
>> sanitizer for our multiphysics simulation program, but I can’t get it to
>> work :-/
>> 
>> I started with a regularly compiled clang installation (with libcxx,
>> libcxxabi, and libomp built in-tree). With this, I compiled all necessary
>> third-party libraries with “-O1 -fsanitize=memory” (OpenMPI, FFTW, Parallel
>> netCDF). Then, I compiled the libcxx/libcxxabi libraries with msan-support
>> by checking out the llvm source and the libcxx/libcxxabi repos into the
>> llvm/projects/ directory. I configured them with LLVM_USE_SANITIZER=Memory
>> and put the msan-instrumented libraries in the LD_LIBRARY_PATH.
>> 
>> Finally, I tried to compile our tool, ZFS, with the memory sanitizer enabled
>> and linked against the msan-compiled third-party libraries as well as the
>> msan-instrumented libcxx/libcxxabi libraries (by putting them in the
>> LD_LIBRARY_PATH). However, here I failed: either at configure time or at
>> compile time (after doing some LD_LIBRARY_PATH trickery), clang exits with
>> the following error:
>> 
>> /pds/opt/llvm-20151121-r253770/bin/clang++: symbol lookup error:
>> /pds/opt/libcxx-msan-20151121-r253770/lib/libc++abi.so.1: undefined symbol:
>> __msan_va_arg_overflow_size_tls
>> 
>> Any suggestions as to what I am doing wrong? Should I put the
>> msan-instrumented libcxx in the LD_LIBRARY_PATH after compilation only?
> 
> Yes, probably. In this case your compiler, which is not built with
> MSan, picked up an instrumented libc++abi.
> Sometimes it is convenient to set RPATH on all msan
> libraries/executables and avoid LD_LIBRARY_PATH.

Thanks for your answer. Unfortunately, I cannot avoid LD_LIBRARY_PATH as it 
includes our cluster-wide LLVM lib dir by default on all our hosts. Thus 
setting RPATH at compile/link time will have no effect as there is no way to 
make it overrule LD_LIBRARY_PATH afaik. Instrumenting all of Clang with MSan 
seems a bit overkill to me (and does not solve the RPATH/LD_LIBRARY_PATH issue 
at runtime anyways). I guess I’ll just have to live with manually changing the 
LD_LIBRARY_PATH after compilation then.

Michael
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Errors when trying to use libcxx/libcxxabi with memory sanitizer

2015-11-23 Thread Evgenii Stepanov via cfe-users
Yes. Effectively you need two environments - one with
msan-instrumented libraries, and one without. Compilation should
happen in the latter (unless you want to instrument the compiler which
is not very productive), and resulting binaries should be run in the
former.
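Schematically, something like this (the binary name is a placeholder, the library paths are the ones from your earlier mails):

# build/link with the plain environment, i.e. without the instrumented libs
# on the loader's search path
env -u LD_LIBRARY_PATH make
# run with the instrumented libraries visible
LD_LIBRARY_PATH=/pds/opt/libcxx-msan-20151121-r253770/lib:/pds/opt/openmpi-1.8.7-clang-msan/lib64 ./zfs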

On Mon, Nov 23, 2015 at 10:50 AM, Schlottke-Lakemper, Michael
 wrote:
>
>> On 23 Nov 2015, at 18:11 , Evgenii Stepanov  
>> wrote:
>>
>> On Mon, Nov 23, 2015 at 12:59 AM, Schlottke-Lakemper, Michael via
>> cfe-users  wrote:
>>> Hi folks,
>>>
>>> I'm trying to set up our cluster tool chain to support clang’s memory
>>> sanitizer for our multiphysics simulation program, but I can’t get it to
>>> work :-/
>>>
>>> I started with a regularly compiled clang installation (with libcxx,
>>> libcxxabi, and libomp built in-tree). With this, I compiled all necessary
>>> third-party libraries with “-O1 -fsanitize=memory” (OpenMPI, FFTW, Parallel
>>> netCDF). Then, I compiled the libcxx/libcxxabi libraries with msan-support
>>> by checking out the llvm source and the libcxx/libcxxabi repos into the
>>> llvm/projects/ directory. I configured them with LLVM_USE_SANITIZER=Memory
>>> and put the msan-instrumented libraries in the LD_LIBRARY_PATH.
>>>
>>> Finally, I tried to compile our tool, ZFS, with the memory sanitizer enabled
>>> and linked against the msan-compiled third-party libraries as well as the
>>> msan-instrumented libcxx/libcxxabi libraries (by putting them in the
>>> LD_LIBRARY_PATH). However, here I failed: either at configure time or at
>>> compile time (after doing some LD_LIBRARY_PATH trickery), clang exits with
>>> the following error:
>>>
>>> /pds/opt/llvm-20151121-r253770/bin/clang++: symbol lookup error:
>>> /pds/opt/libcxx-msan-20151121-r253770/lib/libc++abi.so.1: undefined symbol:
>>> __msan_va_arg_overflow_size_tls
>>>
>>> Any suggestions as to what I am doing wrong? Should I put the
>>> msan-instrumented libcxx in the LD_LIBRARY_PATH after compilation only?
>>
>> Yes, probably. In this case your compiler, which is not built with
>> MSan, picked up an instrumented libc++abi.
>> Sometimes it is convenient to set RPATH on all msan
>> libraries/executables and avoid LD_LIBRARY_PATH.
>
> Thanks for your answer. Unfortunately, I cannot avoid LD_LIBRARY_PATH as it 
> includes our cluster-wide LLVM lib dir by default on all our hosts. Thus 
> setting RPATH at compile/link time will have no effect as there is no way to 
> make it overrule LD_LIBRARY_PATH afaik. Instrumenting all of Clang with MSan 
> seems a bit overkill to me (and does not solve the RPATH/LD_LIBRARY_PATH 
> issue at runtime anyways). I guess I’ll just have to live with manually 
> changing the LD_LIBRARY_PATH after compilation then.
>
> Michael
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Segmentation fault with memory sanitizer and OpenMPI's mpirun

2015-11-23 Thread Evgenii Stepanov via cfe-users
I think so. What probably happens here is that MSan confuses the configure
script into thinking openpty is available without -lutil, but what is actually
available is just the MSan interceptor stub, which tries to call the real
openpty and fails unless libutil is linked in.
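That is, roughly, re-running your configure with libutil added (sketch, keeping your other flags):

# same CC/CXX/CFLAGS/CXXFLAGS as in your original invocation, plus:
LDFLAGS="-lutil" ../openmpi-1.8.7/configure \
  --prefix=/pds/opt/openmpi-1.8.7-clang-msan --disable-mpi-fortran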

On Mon, Nov 23, 2015 at 10:41 AM, Schlottke-Lakemper, Michael
 wrote:
> Hi Evgenii,
>
> Just to clarify: you mean I should re-compile OpenMPI and add “-lutil” to 
> LDFLAGS at configure time?
>
> Yours
>
> Michael
>
>> On 23 Nov 2015, at 18:07 , Evgenii Stepanov  
>> wrote:
>>
>> This is caused by missing -lutil.
>> FTR, http://llvm.org/viewvc/llvm-project?rev=245619&view=rev
>>
>> On Mon, Nov 23, 2015 at 7:33 AM, Schlottke-Lakemper, Michael via
>> cfe-users  wrote:
>>> Hi folks,
>>>
>>> When running “mpirun” of an msan-instrumented installation of OpenMPI, I 
>>> get the following error:
>>>
>>> $> mpirun -n 1 hostname
>>> [aia308:48324] *** Process received signal ***
>>> [aia308:48324] Signal: Segmentation fault (11)
>>> [aia308:48324] Signal code: Address not mapped (1)
>>> [aia308:48324] Failing at address: (nil)
>>> [aia308:48324] [ 0] 
>>> /pds/opt/openmpi-1.8.7-clang-msan/lib64/libopen-pal.so.6(+0x123ca1)[0x7f9e21c90ca1]
>>> [aia308:48324] [ 1] mpirun[0x42b602]
>>> [aia308:48324] [ 2] /lib64/libpthread.so.0(+0xf890)[0x7f9e21250890]
>>> [aia308:48324] *** End of error message ***
>>> Segmentation fault
>>>
>>> Running it through gdb and printing the stacktrace, I get the following 
>>> additional information:
>>> $> gdb -ex r --args mpirun -n 1 hostname
>>> #0  0x in ?? ()
>>> #1  0x0042bd5b in __interceptor_openpty () at 
>>> /pds/opt/install/llvm/llvm-20151121-r253770-src/projects/compiler-rt/lib/msan/msan_interceptors.cc:1355
>>> #2  0x77705a7c in opal_openpty (amaster=0x7fffacf8, 
>>> aslave=0x7fffacfc, name=0x0, termp=0x0, winp=0x0) at 
>>> ../../../openmpi-1.8.7/opal/util/opal_pty.c:116
>>> #3  0x77b31e9a in orte_iof_base_setup_prefork (opts=0x7ffface8) 
>>> at ../../../../openmpi-1.8.7/orte/mca/iof/base/iof_base_setup.c:89
>>> #4  0x7fffefea465a in odls_default_fork_local_proc 
>>> (context=0x7240bb80, child=0x7240b880, environ_copy=0x7341ec00, 
>>> jobdat=0x72209d80) at 
>>> ../../../../../openmpi-1.8.7/orte/mca/odls/default/odls_default_module.c:860
>>> #5  0x77b3cfb8 in orte_odls_base_default_launch_local (fd=-1, sd=4, 
>>> cbdata=0x7062da80) at 
>>> ../../../../openmpi-1.8.7/orte/mca/odls/base/odls_base_default_fns.c:1544
>>> #6  0x777459d7 in event_process_active_single_queue 
>>> (base=0x72a0fc80, activeq=0x7100bdc0) at 
>>> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1367
>>> #7  0x7773bb92 in event_process_active (base=0x72a0fc80) at 
>>> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1437
>>> #8  0x77738fd7 in opal_libevent2021_event_base_loop 
>>> (base=0x72a0fc80, flags=1) at 
>>> ../../../../../../openmpi-1.8.7/opal/mca/event/libevent2021/libevent/event.c:1647
>>> #9  0x0048f651 in orterun (argc=4, argv=0x7fffcea8) at 
>>> ../../../../openmpi-1.8.7/orte/tools/orterun/orterun.c:1133
>>> #10 0x0048b20c in main (argc=4, argv=0x7fffcea8) at 
>>> ../../../../openmpi-1.8.7/orte/tools/orterun/main.c:13
>>>
>>> I suspect that has something to do with using some non-msan-instrumented 
>>> system libraries, but how can I find out which library is the problem and 
>>> what to do to fix it? Any ideas?
>>>
>>> Regards,
>>>
>>> Michael
>>>
>>> P.S.: I compiled OpenMPI with the following configure command (the wildcard 
>>> blacklist was necessary because OpenMPI just has too many issues with 
>>> msan…):
>>>
>>> printf "fun:*\n" > blacklist.txt && \
>>> CC=clang CXX=clang++ \
>>> CFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer 
>>> -fsanitize-memory-track-origins -fsanitize-blacklist=`pwd`/blacklist.txt" \
>>> CXXFLAGS="-g -fsanitize=memory -fno-omit-frame-pointer 
>>> -fsanitize-memory-track-origins -fsanitize-blacklist=`pwd`/blacklist.txt" \
>>> ../openmpi-1.8.7/configure --prefix=/pds/opt/openmpi-1.8.7-clang-msan 
>>> --disable-mpi-fortran
>>> ___
>>> cfe-users mailing list
>>> cfe-users@lists.llvm.org
>>> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users
>
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Errors when trying to use libcxx/libcxxabi with memory sanitizer

2015-11-23 Thread Schlottke-Lakemper, Michael via cfe-users

> On 23 Nov 2015, at 20:01 , Evgenii Stepanov  wrote:
> 
> Yes. Effectively you need two environments - one with
> msan-instrumented libraries, and one without. Compilation should
> happen in the latter (unless you want to instrument the compiler which
> is not very productive), and resulting binaries should be run in the
> former.
> 

Ah, thanks a lot. Just to be sure: is it generally OK to compile & link in an 
environment where ld would e.g. pick up an uninstrumented version of the 
OpenMPI libraries as long as the correct, instrumented OpenMPI library is in 
LD_LIBRARY_PATH at runtime?

Sorry for having to ask again

Michael


> On Mon, Nov 23, 2015 at 10:50 AM, Schlottke-Lakemper, Michael
>  wrote:
>> 
>>> On 23 Nov 2015, at 18:11 , Evgenii Stepanov  
>>> wrote:
>>> 
>>> On Mon, Nov 23, 2015 at 12:59 AM, Schlottke-Lakemper, Michael via
>>> cfe-users  wrote:
>>>> Hi folks,
>>>> 
>>>> I'm trying to set up our cluster tool chain to support clang’s memory
>>>> sanitizer for our multiphysics simulation program, but I can’t get it to
>>>> work :-/
>>>> 
>>>> I started with a regularly compiled clang installation (with libcxx,
>>>> libcxxabi, and libomp built in-tree). With this, I compiled all necessary
>>>> third-party libraries with “-O1 -fsanitize=memory” (OpenMPI, FFTW, Parallel
>>>> netCDF). Then, I compiled the libcxx/libcxxabi libraries with msan-support
>>>> by checking out the llvm source and the libcxx/libcxxabi repos into the
>>>> llvm/projects/ directory. I configured them with LLVM_USE_SANITIZER=Memory
>>>> and put the msan-instrumented libraries in the LD_LIBRARY_PATH.
>>>> 
>>>> Finally, I tried to compile our tool, ZFS, with the memory sanitizer enabled
>>>> and linked against the msan-compiled third-party libraries as well as the
>>>> msan-instrumented libcxx/libcxxabi libraries (by putting them in the
>>>> LD_LIBRARY_PATH). However, here I failed: either at configure time or at
>>>> compile time (after doing some LD_LIBRARY_PATH trickery), clang exits with
>>>> the following error:
>>>> 
>>>> /pds/opt/llvm-20151121-r253770/bin/clang++: symbol lookup error:
>>>> /pds/opt/libcxx-msan-20151121-r253770/lib/libc++abi.so.1: undefined symbol:
>>>> __msan_va_arg_overflow_size_tls
>>>> 
>>>> Any suggestions as to what I am doing wrong? Should I put the
>>>> msan-instrumented libcxx in the LD_LIBRARY_PATH after compilation only?
>>> 
>>> Yes, probably. In this case your compiler, which is not built with
>>> MSan, picked up an instrumented libc++abi.
>>> Sometimes it is convenient to set RPATH on all msan
>>> libraries/executables and avoid LD_LIBRARY_PATH.
>> 
>> Thanks for your answer. Unfortunately, I cannot avoid LD_LIBRARY_PATH as it 
>> includes our cluster-wide LLVM lib dir by default on all our hosts. Thus 
>> setting RPATH at compile/link time will have no effect as there is no way to 
>> make it overrule LD_LIBRARY_PATH afaik. Instrumenting all of Clang with MSan 
>> seems a bit overkill to me (and does not solve the RPATH/LD_LIBRARY_PATH 
>> issue at runtime anyways). I guess I’ll just have to live with manually 
>> changing the LD_LIBRARY_PATH after compilation then.
>> 
>> Michael

___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Errors when trying to use libcxx/libcxxabi with memory sanitizer

2015-11-23 Thread MeldaProduction via cfe-users
unsubscribe

Cheers!
Vojtech
www.meldaproduction.com
Facebook, Twitter, Youtube


2015-11-23 20:13 GMT+01:00 Schlottke-Lakemper, Michael via cfe-users <
cfe-users@lists.llvm.org>:

>
> > On 23 Nov 2015, at 20:01 , Evgenii Stepanov 
> wrote:
> >
> > Yes. Effectively you need two environments - one with
> > msan-instrumented libraries, and one without. Compilation should
> > happen in the latter (unless you want to instrument the compiler which
> > is not very productive), and resulting binaries should be run in the
> > former.
> >
>
> Ah, thanks a lot. Just to be sure: is it generally OK to compile & link in
> an environment where ld would e.g. pick up an uninstrumented version of the
> OpenMPI libraries as long as the correct, instrumented OpenMPI library is
> in LD_LIBRARY_PATH at runtime?
>
> Sorry for having to ask again
>
> Michael
>
>
> > On Mon, Nov 23, 2015 at 10:50 AM, Schlottke-Lakemper, Michael
> >  wrote:
> >>
> >>> On 23 Nov 2015, at 18:11 , Evgenii Stepanov 
> wrote:
> >>>
> >>> On Mon, Nov 23, 2015 at 12:59 AM, Schlottke-Lakemper, Michael via
> >>> cfe-users  wrote:
>  Hi folks,
> 
>  I'm trying to set up our cluster tool chain to support clang’s memory
>  sanitizer for our multiphysics simulation program, but I can’t get it
> to
>  work :-/
> 
>  I started with a regularly compiled clang installation (with libcxx,
>  libcxxabi, and libomp built in-tree). With this, I compiled all
> necessary
>  third-party libraries with “-O1 -fsanitize=memory” (OpenMPI, FFTW,
> Parallel
>  netCDF). Then, I compiled the libcxx/libcxxabi libraries with
> msan-support
>  by checking out the llvm source and the libcxx/libcxxabi repos into
> the
>  llvm/projects/ directory. I configured them with
> LLVM_USE_SANITIZER=Memory
>  and put the msan-instrumented libraries in the LD_LIBRARY_PATH.
> 
>  Finally, I tried to compile our tool, ZFS, with the memory sanitizer
> enabled
>  and linked against the msan-compiled third-party libraries as well as
> the
>  msan-instrumented libcxx/libcxxabi libraries (by putting them in the
>  LD_LIBRARY_PATH). However, here I failed: either at configure time or
> at
>  compile time (after doing some LD_LIBRARY_PATH trickery), clang exits
> with
>  the following error:
> 
>  /pds/opt/llvm-20151121-r253770/bin/clang++: symbol lookup error:
>  /pds/opt/libcxx-msan-20151121-r253770/lib/libc++abi.so.1: undefined
> symbol:
>  __msan_va_arg_overflow_size_tls
> 
>  Any suggestions as to what I am doing wrong? Should I put the
>  msan-instrumented libcxx in the LD_LIBRARY_PATH after compilation
> only?
> >>>
> >>> Yes, probably. In this case your compiler, which is not built with
> >>> MSan, picked up an instrumented libc++abi.
> >>> Sometimes it is convenient to set RPATH on all msan
> >>> libraries/executables and avoid LD_LIBRARY_PATH.
> >>
> >> Thanks for your answer. Unfortunately, I cannot avoid LD_LIBRARY_PATH
> as it includes our cluster-wide LLVM lib dir by default on all our hosts.
> Thus setting RPATH at compile/link time will have no effect as there is no
> way to make it overrule LD_LIBRARY_PATH afaik. Instrumenting all of Clang
> with MSan seems a bit overkill to me (and does not solve the
> RPATH/LD_LIBRARY_PATH issue at runtime anyways). I guess I’ll just have to
> live with manually changing the LD_LIBRARY_PATH after compilation then.
> >>
> >> Michael
>
> ___
> cfe-users mailing list
> cfe-users@lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users
>
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users


Re: [cfe-users] Errors when trying to use libcxx/libcxxabi with memory sanitizer

2015-11-23 Thread Evgenii Stepanov via cfe-users
It works in general.
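You can double-check which copy the dynamic loader actually picks up, e.g. (the binary name is a placeholder):

LD_LIBRARY_PATH=/pds/opt/openmpi-1.8.7-clang-msan/lib64:$LD_LIBRARY_PATH ldd ./zfs | grep -i mpi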

On Mon, Nov 23, 2015 at 11:13 AM, Schlottke-Lakemper, Michael
 wrote:
>
>> On 23 Nov 2015, at 20:01 , Evgenii Stepanov  
>> wrote:
>>
>> Yes. Effectively you need two environments - one with
>> msan-instrumented libraries, and one without. Compilation should
>> happen in the latter (unless you want to instrument the compiler which
>> is not very productive), and resulting binaries should be run in the
>> former.
>>
>
> Ah, thanks a lot. Just to be sure: is it generally OK to compile & link in an 
> environment where ld would e.g. pick up an uninstrumented version of the 
> OpenMPI libraries as long as the correct, instrumented OpenMPI library is in 
> LD_LIBRARY_PATH at runtime?
>
> Sorry for having to ask again
>
> Michael
>
>
>> On Mon, Nov 23, 2015 at 10:50 AM, Schlottke-Lakemper, Michael
>>  wrote:
>>>
>>>> On 23 Nov 2015, at 18:11 , Evgenii Stepanov  wrote:
>>>> 
>>>> On Mon, Nov 23, 2015 at 12:59 AM, Schlottke-Lakemper, Michael via
>>>> cfe-users  wrote:
>>>>> Hi folks,
>>>>> 
>>>>> I'm trying to set up our cluster tool chain to support clang’s memory
>>>>> sanitizer for our multiphysics simulation program, but I can’t get it to
>>>>> work :-/
>>>>> 
>>>>> I started with a regularly compiled clang installation (with libcxx,
>>>>> libcxxabi, and libomp built in-tree). With this, I compiled all necessary
>>>>> third-party libraries with “-O1 -fsanitize=memory” (OpenMPI, FFTW, Parallel
>>>>> netCDF). Then, I compiled the libcxx/libcxxabi libraries with msan-support
>>>>> by checking out the llvm source and the libcxx/libcxxabi repos into the
>>>>> llvm/projects/ directory. I configured them with LLVM_USE_SANITIZER=Memory
>>>>> and put the msan-instrumented libraries in the LD_LIBRARY_PATH.
>>>>> 
>>>>> Finally, I tried to compile our tool, ZFS, with the memory sanitizer enabled
>>>>> and linked against the msan-compiled third-party libraries as well as the
>>>>> msan-instrumented libcxx/libcxxabi libraries (by putting them in the
>>>>> LD_LIBRARY_PATH). However, here I failed: either at configure time or at
>>>>> compile time (after doing some LD_LIBRARY_PATH trickery), clang exits with
>>>>> the following error:
>>>>> 
>>>>> /pds/opt/llvm-20151121-r253770/bin/clang++: symbol lookup error:
>>>>> /pds/opt/libcxx-msan-20151121-r253770/lib/libc++abi.so.1: undefined symbol:
>>>>> __msan_va_arg_overflow_size_tls
>>>>> 
>>>>> Any suggestions as to what I am doing wrong? Should I put the
>>>>> msan-instrumented libcxx in the LD_LIBRARY_PATH after compilation only?
>>>> 
>>>> Yes, probably. In this case your compiler, which is not built with
>>>> MSan, picked up an instrumented libc++abi.
>>>> Sometimes it is convenient to set RPATH on all msan
>>>> libraries/executables and avoid LD_LIBRARY_PATH.
>>>
>>> Thanks for your answer. Unfortunately, I cannot avoid LD_LIBRARY_PATH as it 
>>> includes our cluster-wide LLVM lib dir by default on all our hosts. Thus 
>>> setting RPATH at compile/link time will have no effect as there is no way 
>>> to make it overrule LD_LIBRARY_PATH afaik. Instrumenting all of Clang with 
>>> MSan seems a bit overkill to me (and does not solve the 
>>> RPATH/LD_LIBRARY_PATH issue at runtime anyways). I guess I’ll just have to 
>>> live with manually changing the LD_LIBRARY_PATH after compilation then.
>>>
>>> Michael
>
___
cfe-users mailing list
cfe-users@lists.llvm.org
http://lists.llvm.org/cgi-bin/mailman/listinfo/cfe-users