Hi Ayush,
Yes, I am able to access the namenode UI.
Below is the value of the nameservice:
<property>
<name>dfs.nameservices</name>
<value>hdfs</value>
</property>
And below is the defaultFS value:
<property>
<name>fs.defaultFS</name>
<value>hdfs://hdfs</value>
</property>
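For the client to resolve the logical name "hdfs" (rather than treating it as a DNS hostname, which produces the UnknownHostException seen below), hdfs-site.xml on the client also needs the HA client properties from the linked HA guide. A minimal sketch, assuming namenode IDs nn1/nn2 and hypothetical hostnames (substitute your actual namenode hosts):

```xml
<property>
  <name>dfs.ha.namenodes.hdfs</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdfs.nn1</name>
  <!-- hypothetical hostname; replace with your first namenode -->
  <value>namenode1.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.hdfs.nn2</name>
  <!-- hypothetical hostname; replace with your second namenode -->
  <value>namenode2.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.hdfs</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```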
Regards,
Kamaraj
InfoSight - SRE
From: Ayush Saxena <[email protected]>
Sent: Sunday, August 23, 2020 8:46 PM
To: Muthupandiyan, Kamaraj <[email protected]>
Cc: Mingliang Liu <[email protected]>; [email protected]
Subject: Re: Error while running "hdfs fs" commands
Are you able to access the namenode UI? If so, check the value of
"Namespace:"; say it is mycluster, then configure fs.defaultFS as
hdfs://mycluster.
The value should be in hdfs-site.xml as well, under dfs.nameservices.
-Ayush
On Sun, 23 Aug 2020 at 20:20, Muthupandiyan, Kamaraj
<[email protected]<mailto:[email protected]>> wrote:
Hi Ayush,
Thanks for your reply; yes, we followed the same document and applied the
config. For our setup, we have configured HA and it's working fine. But we are
getting the following error while executing the commands:
-ls: java.net.UnknownHostException: hdfs
-mkdir: java.net.UnknownHostException: hdfs
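The UnknownHostException above can be illustrated with a simplified sketch (not Hadoop's actual code) of how an HDFS client handles the authority in fs.defaultFS: if it matches a configured nameservice with per-namenode RPC addresses, the client proxies to those namenodes; otherwise it falls back to DNS, and a logical name like "hdfs" fails to resolve:

```python
import socket

def resolve_authority(authority, conf):
    """Illustrative only: decide whether `authority` is a logical
    nameservice (HA) or a plain hostname, mimicking the client's choice."""
    nameservices = conf.get("dfs.nameservices", "").split(",")
    if authority in nameservices:
        # HA path: look up the per-namenode RPC addresses for this service.
        nn_ids = conf.get(f"dfs.ha.namenodes.{authority}", "")
        if nn_ids:
            return [conf[f"dfs.namenode.rpc-address.{authority}.{nn}"]
                    for nn in nn_ids.split(",")]
    # Non-HA path: the authority must be a DNS-resolvable hostname.
    try:
        socket.gethostbyname(authority.split(":")[0])
        return [authority]
    except socket.gaierror:
        # Analogous to Java's java.net.UnknownHostException
        raise OSError(f"UnknownHostException: {authority}")
```

With only dfs.nameservices set (and no dfs.ha.namenodes.* entries), the sketch falls through to the DNS path, which is where the reported error comes from.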
Regards,
Kamaraj
InfoSight - SRE
From: Ayush Saxena <[email protected]>
Sent: Sunday, August 23, 2020 7:17 PM
To: Muthupandiyan, Kamaraj <[email protected]>
Cc: Mingliang Liu <[email protected]>; [email protected]
Subject: Re: Error while running "hdfs fs" commands
Hi Kamaraj,
If you are trying to set up an HA cluster, you need a couple more configs as
well. You can follow this document; it should answer all your doubts:
https://hadoop.apache.org/docs/r3.3.0/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html#Deployment
-Ayush
On Sun, 23 Aug 2020 at 18:53, Muthupandiyan, Kamaraj
<[email protected]<mailto:[email protected]>> wrote:
Hi Mingliang Liu,
Thank you for your reply.
As I am completely new to setting up a Hadoop cluster, I am not sure about the
value of fs.defaultFS. I have set the value below; could you please
help me with the correct one?
<property>
<name>fs.defaultFS</name>
<value>hdfs://hdfs</value>
</property>
As additional information, we have two namenodes in our cluster.
Regards,
Kamaraj
InfoSight - SRE
From: Mingliang Liu <[email protected]>
Sent: Friday, August 21, 2020 11:14 PM
To: Muthupandiyan, Kamaraj <[email protected]>
Cc: [email protected]<mailto:[email protected]>
Subject: Re: Error while running "hdfs fs" commands
It seems that you have a wrong fs.defaultFS configuration in your
core-site.xml. For the second command, you should follow the user manual; the
input is simply not valid. Try something like:
hadoop fs -ls hdfs://sym-hdfsnn1.lvs.nimblestorage.com/user
or
hadoop fs -ls file:///tmp
On Fri, Aug 21, 2020 at 9:07 AM Muthupandiyan, Kamaraj
<[email protected]<mailto:[email protected]>> wrote:
Hi Team,
Whenever I execute an hdfs fs command, I get the error below;
please help me with this.
[hdfs@hdfsnn1 ~]$ hdfs dfs -ls /user
-ls: java.net.UnknownHostException: hdfs
[hdfs@sym-hdfsnn1 ~]$ hadoop fs -ls / file:///%7Chdfs:hdfsnn1:8020/
-bash: hdfs:sym-hdfsnn1.lvs.nimblestorage.com:8020/: No such file or directory
-ls: java.net.UnknownHostException: hdfs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
[-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] <path> ...]
[-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] [-x] <path> ...]
[-expunge]
[-find <path> ... <expression> ...]
[-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
[-help [cmd ...]]
[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
Regards,
Kamaraj
InfoSight - SRE