Public bug reported:

$ lsb_release -rd
Description:    Ubuntu 14.04 LTS
Release:        14.04

$ uname -a
Linux tony1 3.13.0-19-generic #40-Ubuntu SMP Mon Mar 24 02:36:13 UTC 2014 ppc64le ppc64le ppc64le GNU/Linux

$ apt-cache policy openjdk-7-jre-lib
openjdk-7-jre-lib:
  Installed: 7u51-2.4.6-1ubuntu4
  Candidate: 7u51-2.4.6-1ubuntu4
  Version table:
 *** 7u51-2.4.6-1ubuntu4 0
        500 http://ports.ubuntu.com/ubuntu-ports/ trusty/universe ppc64el Packages
        100 /var/lib/dpkg/status

$ apt-cache policy openjdk-7-jre
openjdk-7-jre:
  Installed: 7u51-2.4.6-1ubuntu4
  Candidate: 7u51-2.4.6-1ubuntu4
  Version table:
 *** 7u51-2.4.6-1ubuntu4 0
        500 http://ports.ubuntu.com/ubuntu-ports/ trusty/main ppc64el Packages
        100 /var/lib/dpkg/status

$ apt-cache policy openjdk-7-jdk
openjdk-7-jdk:
  Installed: 7u51-2.4.6-1ubuntu4
  Candidate: 7u51-2.4.6-1ubuntu4
  Version table:
 *** 7u51-2.4.6-1ubuntu4 0
        500 http://ports.ubuntu.com/ubuntu-ports/ trusty/main ppc64el Packages
        100 /var/lib/dpkg/status


I'm running the Hadoop 2.4.0 tests on Ubuntu 14.04 on PPC64-LE.
I've run the Hadoop 2.4.0 test suite about 4 times and got 7 core dumps: 3 with Java6
(6.0_30-b30) and now 4 with Java7 (7.0_51-b31).
Before Hadoop 2.4.0, I was testing Hadoop 2.2.0 and got only 1 core dump
(1.7.0_51-b31) over more test runs.
I'm compiling/testing Hadoop without the Hadoop JNI code (no -Dnative option).
At some point I raised the ulimit and captured 1 (big) core file.
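For reference, this is the usual way core dumps are enabled on Linux (a generic sketch, not necessarily the exact commands I used):

```shell
# Allow core files of unlimited size for processes started from this shell
ulimit -c unlimited

# Confirm the new limit ("unlimited" if the hard limit allows it)
ulimit -c

# Optional: see where/how the kernel names core files
cat /proc/sys/kernel/core_pattern
```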

All five Java7 cores look the same:
- appearing in:
   hadoop-hdfs-project/hadoop-hdfs
- dealing with:
Current thread (.....):  JavaThread "FSImageSaver for :
  
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name2
  
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/name1

However, the cores caused different Hadoop tests to crash.
Running all the Hadoop tests several times showed that the JVM crashes are random:
the same elementary test may crash the JVM, run perfectly, or report a failure
when run several times.
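One way to expose such an intermittent crash is to rerun a single test in a loop and watch for hs_err files. A hypothetical sketch (TestCheckpoint and the loop count are placeholders, not necessarily the test that crashed here):

```shell
#!/bin/sh
# Rerun one HDFS test repeatedly; stop as soon as the JVM leaves a crash log.
# NOTE: TestCheckpoint is a placeholder test name.
MODULE=hadoop-hdfs-project/hadoop-hdfs
if [ -d "$MODULE" ]; then
    cd "$MODULE" || exit 1
    i=1
    while [ "$i" -le 10 ]; do
        echo "=== run $i ==="
        mvn -q test -Dtest=TestCheckpoint
        # On a fatal error the JVM writes hs_err_pid<NNN>.log
        # into the working directory
        if ls hs_err_pid*.log >/dev/null 2>&1; then
            echo "JVM crash log found on run $i"
            break
        fi
        i=$((i + 1))
    done
fi
```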


hadoop-hdfs-project/hadoop-hdfs/hs_err_pid7832.log  :

# A fatal error has been detected by the Java Runtime Environment:
#
#  Internal Error (os_linux_zero.cpp:254), pid=7832, tid=70365657166240
#  fatal error: caught unhandled signal 11
#
# JRE version: OpenJDK Runtime Environment (7.0_51-b31) (build 1.7.0_51-b31)
# Java VM: OpenJDK 64-Bit Zero VM (24.51-b03 interpreted mode linux-ppc64le )

---------------  T H R E A D  ---------------

Current thread (0x00003fff946fdee0):  JavaThread "FSImageSaver for
/home/tony/HADOOP/hadoop-2.4.0-src/hadoop-hdfs-project/hadoop-
hdfs/target/test/data/dfs/name1 of type IMAGE_AND_EDITS"
[_thread_in_Java, id=7843, stack(0x00003fff47e60000,0x00003fff48000000)]

Stack: [0x00003fff47e60000,0x00003fff48000000],  sp=0x00003fff47f353e0,
free space=852k
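For context, the "Zero VM" named in the log is the portable, interpreter-only HotSpot port used on architectures without a JIT assembler port (which also explains the "interpreted mode" in the banner). Which VM variant is in use can be checked from the version banner:

```shell
# The last line of the banner names the VM variant; on this ppc64le
# system it reports "OpenJDK 64-Bit Zero VM", vs. e.g. "Server VM"
# on ports with a JIT compiler.
java -version
```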

** Affects: openjdk-7 (Ubuntu)
     Importance: Undecided
         Status: New

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1312598

Title:
  fatal error: caught unhandled signal 11

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/openjdk-7/+bug/1312598/+subscriptions
