Hello Everyone,

I am planning to upgrade our Hadoop cluster from v1.0.4 to v2.7.3, together with HBase from 0.94 to 1.3. Does anyone know of the steps to follow?

Thanks in advance,

Donald Nelson


On 05/18/2017 12:39 PM, Bhushan Pathak wrote:
What configuration do you want me to check? Each of the three nodes can access the others via password-less SSH and can ping the others' IPs.

Thanks
Bhushan Pathak

On Wed, May 17, 2017 at 10:11 PM, Sidharth Kumar <[email protected]> wrote:

    Hi,

    The error you mentioned below, 'Name or service not known', means the
    servers are not able to resolve each other's hostnames. Check the
    network configuration.
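A quick way to confirm the resolution failure (a generic check, not specific to Hadoop; replace "master" with the hostname configured in your /etc/hosts):

```shell
# Ask the resolver (hosts file + DNS) for the hostname; failure here
# corresponds to the 'Name or service not known' error above
getent hosts master || echo "master: Name or service not known"
```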

    Sidharth
    Mob: +91 8197555599
    LinkedIn: www.linkedin.com/in/sidharthkumar2792

    On 17-May-2017 12:13 PM, "Bhushan Pathak" <[email protected]> wrote:

        Apologies for the delayed reply; I was away due to some personal
        issues.

        I tried the telnet command as well, but no luck. I get the
        response 'Name or service not known'.

        Thanks
        Bhushan Pathak

        On Wed, May 3, 2017 at 7:48 AM, Sidharth Kumar <[email protected]> wrote:

            Can you check whether the ports are open by running the telnet
            command? Run the command below from the source machine to the
            destination machine and see if it helps:

            $ telnet <IP address> <port number>
            Ex: $ telnet 192.168.1.60 9000
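If telnet is not installed, bash's built-in /dev/tcp device can run the same probe (a generic alternative, not part of the advice above; the IP and port are the example values from the telnet command):

```shell
# Try to open a TCP connection to the example host/port; the exit
# status of the subshell tells whether the port accepted the connection
if timeout 2 bash -c 'exec 3<>/dev/tcp/192.168.1.60/9000' 2>/dev/null; then
    echo "port open"
else
    echo "port closed or unreachable"
fi
```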


            Let's Hadooping....!

            Bests
            Sidharth
            Mob: +91 8197555599
            LinkedIn: www.linkedin.com/in/sidharthkumar2792

            On 28-Apr-2017 10:32 AM, "Bhushan Pathak" <[email protected]> wrote:

                Hello All,

                1. The slave & master nodes can ping each other as well as
                use password-less SSH.
                2. The actual IPs start with 10.x.x.x; I have put
                placeholders in the config files as I cannot share the
                actual IPs.
                3. The namenode is formatted. I executed 'hdfs namenode
                -format' again just to rule out that possibility.
                4. I did not configure anything in the master file; I don't
                think Hadoop 2.7.3 has a master file to be configured.
                5. The netstat command [sudo netstat -tulpn | grep '51150']
                does not give any output.

                Even if I change the port number to a different one, say
                52220 or 50000, I still get the same error.
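For reference, the same listening-port check can be run with ss, which replaces netstat on CentOS 7 (51150 is the port from this thread):

```shell
# List listening TCP sockets with owning processes and filter for the
# namenode port; empty grep output means nothing is bound to it
ss -tlnp | grep ':51150' || echo "nothing listening on 51150"
```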

                Thanks
                Bhushan Pathak

                On Fri, Apr 28, 2017 at 7:52 AM, Lei Cao <[email protected]> wrote:

                    Hi Mr. Bhushan,

                    Have you tried formatting the namenode?
                    Here's the command:
                    hdfs namenode -format

                    I've encountered a similar problem where the namenode
                    could not be started, and this command fixed it for me.
                    (Note that formatting wipes existing HDFS metadata, so
                    it is only safe on a fresh cluster.)

                    Hope this can help you.

                    Sincerely,
                    Lei Cao


                    On Apr 27, 2017, at 12:09, Brahma Reddy Battula <[email protected]> wrote:

                    Please check "hostname -i".

                    1) What is configured in the "master" file? (You shared
                    only the slaves file.)

                    2) Are you able to "ping master"?

                    3) Can you configure it like this and check once?

                                    1.1.1.1 master
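The suggestion above can be checked concretely. Here 1.1.1.1 stands in for the node's real address; the point is that "master" must map to an IP that is actually bound on the machine, otherwise the namenode cannot bind to master:51150:

```shell
# Show how "master" is mapped in /etc/hosts (1.1.1.1 is a placeholder;
# no match means the name is missing from the file)
grep -w master /etc/hosts || echo "master not present in /etc/hosts"

# "hostname -i" must print an address that also appears among the
# locally configured interfaces shown by "ip addr"
hostname -i
ip addr show | grep -w inet
```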

                    Regards

                    Brahma Reddy Battula

                    From: Bhushan Pathak [mailto:[email protected]]
                    Sent: 27 April 2017 18:16
                    To: Brahma Reddy Battula
                    Cc: [email protected]
                    Subject: Re: Hadoop 2.7.3 cluster namenode not starting

                    Some additional info -

                    OS: CentOS 7

                    RAM: 8GB

                    Thanks

                    Bhushan Pathak

                    On Thu, Apr 27, 2017 at 3:34 PM, Bhushan Pathak <[email protected]> wrote:

                        Yes, I'm running the command on the master node.

                        Attached are the config files & the hosts file. I
                        have changed only the IP addresses, as per company
                        policy the original addresses cannot be shared.

                        The same config files & hosts file exist on all 3
                        nodes.

                        Thanks

                        Bhushan Pathak

                        On Thu, Apr 27, 2017 at 3:02 PM, Brahma Reddy Battula <[email protected]> wrote:

                            Are you sure that you are starting it on the
                            same machine (master)?

                            Please share "/etc/hosts" and the configuration
                            files.

                            Regards

                            Brahma Reddy Battula

                            From: Bhushan Pathak [mailto:[email protected]]
                            Sent: 27 April 2017 17:18
                            To: [email protected]
                            Subject: Fwd: Hadoop 2.7.3 cluster namenode not starting

                            Hello

                            I have a 3-node cluster where I have installed
                            Hadoop 2.7.3. I have updated the core-site.xml,
                            mapred-site.xml, slaves, hdfs-site.xml,
                            yarn-site.xml, and hadoop-env.sh files with
                            basic settings on all 3 nodes.

                            When I execute start-dfs.sh on the master node,
                            the namenode does not start. The logs contain
                            the following error -

                            2017-04-27 14:17:57,166 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
                            java.net.BindException: Problem binding to [master:51150] java.net.BindException: Cannot assign requested address; For more details see: http://wiki.apache.org/hadoop/BindException
                                  at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
                                  at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
                                  at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
                                  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
                                  at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:792)
                                  at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:721)
                                  at org.apache.hadoop.ipc.Server.bind(Server.java:425)
                                  at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:574)
                                  at org.apache.hadoop.ipc.Server.<init>(Server.java:2215)
                                  at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:951)
                                  at org.apache.hadoop.ipc.ProtobufRpcEngine$Server.<init>(ProtobufRpcEngine.java:534)
                                  at org.apache.hadoop.ipc.ProtobufRpcEngine.getServer(ProtobufRpcEngine.java:509)
                                  at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:796)
                                  at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:345)
                                  at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:674)
                                  at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:647)
                                  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:812)
                                  at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:796)
                                  at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1493)
                                  at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1559)
                            Caused by: java.net.BindException: Cannot assign requested address
                                  at sun.nio.ch.Net.bind0(Native Method)
                                  at sun.nio.ch.Net.bind(Net.java:433)
                                  at sun.nio.ch.Net.bind(Net.java:425)
                                  at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
                                  at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
                                  at org.apache.hadoop.ipc.Server.bind(Server.java:408)
                                  ... 13 more

                            2017-04-27 14:17:57,171 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
                            2017-04-27 14:17:57,176 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
                            /************************************************************
                            SHUTDOWN_MSG: Shutting down NameNode at master/1.1.1.1
                            ************************************************************/

                            I have changed the port number multiple times;
                            every time I get the same error. How do I get
                            past this?

                            Thanks

                            Bhushan Pathak





