It looks like hadoop-hdfs-2.8.0.jar is missing from the classpath. Please check the
client classpath.
Perhaps the permissions are wrong, or this jar was missed while copying?
Reference: org.apache.hadoop.fs.FileSystem#getFileSystemClass

    if (clazz == null) {
      throw new UnsupportedFileSystemException("No FileSystem for scheme "
          + "\"" + scheme + "\"");
    }
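If the jar is actually present but the service file that maps schemes to implementation classes was lost (this can happen when jars are repackaged into an uber jar and the META-INF/services entries are overwritten), a common client-side workaround is to map the scheme to its implementation class explicitly. A sketch for core-site.xml; fs.hdfs.impl and DistributedFileSystem are the standard property name and class for the "hdfs" scheme:

```xml
<!-- core-site.xml: bind the "hdfs" scheme to its FileSystem implementation
     explicitly, so the lookup does not depend on the META-INF/services file. -->
<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
```

The same override can be set programmatically with conf.set("fs.hdfs.impl", "org.apache.hadoop.hdfs.DistributedFileSystem") before calling FileSystem.get(conf).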
--Brahma Reddy Battula
From: omprakash [mailto:[email protected]]
Sent: 31 July 2017 18:17
To: 'user'
Subject: No FileSystem for scheme: hdfs when using hadoop-2.8.0 jars
Hi all,
I have moved my Hadoop 2.7.0 cluster to version 2.8.0. I have a client
application that uses HDFS to get and store files. But after replacing the 2.7.0
jars with the new jars (version 2.8.0), I am facing the exception below:
Exception in thread "main" java.io.IOException: No FileSystem for scheme: hdfs
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2798)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2809)
    at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:100)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2848)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2830)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:389)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:181)
    at HadoopTestStubs.HadoopHighAvailabilityTest.main(HadoopHighAvailabilityTest.java:31)
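For context on the failing frame: FileSystem.getFileSystemClass resolves a scheme by first checking an explicit "fs.<scheme>.impl" configuration entry, then falling back to java.util.ServiceLoader, which reads META-INF/services/org.apache.hadoop.fs.FileSystem entries from every jar on the classpath. If no jar contributes an entry for "hdfs", this exception is thrown. The JDK's own java.nio providers use the same discovery mechanism, so here is a JDK-only sketch of the idea (no Hadoop jars required; SchemeLookupDemo is an illustrative class name):

```java
import java.nio.file.spi.FileSystemProvider;

public class SchemeLookupDemo {
    public static void main(String[] args) {
        // Providers are discovered via ServiceLoader from META-INF/services
        // files on the classpath -- the same mechanism Hadoop's FileSystem
        // uses to map a URI scheme like "hdfs" to an implementation class.
        for (FileSystemProvider p : FileSystemProvider.installedProviders()) {
            System.out.println(p.getScheme() + " -> " + p.getClass().getName());
        }
    }
}
```

If a scheme is absent from every service file (and no explicit override is configured), the lookup comes back empty, which is exactly the clazz == null branch quoted above.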
Below is the code I am trying to execute
package HadoopTestStubs;

import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class HadoopHighAvailabilityTest {

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration(false);
        conf.set("fs.defaultFS", "hdfs://HdfsCluster");
        conf.set("fs.default.name", conf.get("fs.defaultFS"));
        conf.set("dfs.nameservices", "HdfsCluster");
        conf.set("dfs.ha.namenodes.HdfsCluster", "namenode1,namenode2");
        conf.set("dfs.namenode.rpc-address.HdfsCluster.namenode1", "node1:8020");
        conf.set("dfs.namenode.rpc-address.HdfsCluster.namenode2", "node2:8020");
        conf.set("dfs.client.failover.proxy.provider.HdfsCluster",
                "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
        printConfigurations(conf);
        FileSystem fs = FileSystem.get(conf); // Exception here
    }

    // Dump the effective configuration for debugging.
    private static void printConfigurations(Configuration conf) {
        for (Map.Entry<String, String> entry : conf) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}
What am I doing wrong here? I have checked the core-site.xml file for release
2.8.0. I can see there are changes in the FileSystem implementation, but I
couldn't figure out why the above code is not working.
Please help.
Regards
Omprakash Paliwal