Place the default core-site.xml, hdfs-site.xml, and log4j.properties (not your
customized *.xml files) in the tree below, then rebuild the jar with
'mvn clean package'. The resources (*.xml, *.properties) will then be packaged
into the jar.
 .
├── pom.xml 
└── src
    └── main
        └── resources
            ├── core-site.xml
            ├── hdfs-site.xml
            └── log4j.properties
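
To confirm the resources really landed in the jar, you can list its contents. This is a sketch; the jar name is taken from the artifactId/version in the pom.xml later in this thread and may differ in your build:

```shell
# Rebuild so that src/main/resources/* is packaged into the jar
mvn clean package

# The three resources should appear at the root of the jar
jar tf target/hdfs-connector-1.0-SNAPSHOT.jar \
  | grep -E 'core-site\.xml|hdfs-site\.xml|log4j\.properties'
```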
 
You can then use the jar like this:
 
java -jar ${build_jar} -conf /opt/hadoop/etc/hdfs-site.xml -ls hdfs://${nameservices}/user/home
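
One more note on the error below: HADOOP_CONF_DIR is read by the hadoop wrapper scripts, not by a bare `java -jar` invocation, and Hadoop's Configuration loads core-site.xml from the classpath. So as an alternative to repackaging the files, the conf directory can be put on the classpath (a sketch, using the jar name and conf path from this thread; note `java -jar` ignores `-cp`, so the main class must be named explicitly):

```shell
# Put the external conf dir on the classpath so Configuration can find
# core-site.xml / hdfs-site.xml, then run FsShell directly:
java -cp "hdfs-fs.jar:/opt/hbase/conf" org.apache.hadoop.fs.FsShell -ls /
```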


 
-----Original Message-----
From: "F21"<[email protected]> 
To: "권병창"<[email protected]>; 
Cc: 
Sent: 2016-08-30 (Tue) 12:47:16
Subject: Re: Installing just the HDFS client
 

  
    
  
  
    I am still getting the same error despite setting the variable. In my
      case, hdfs-site.xml and core-site.xml are customized at run time, so
      they cannot be compiled into the jar.

      

      My core-site.xml and hdfs-site.xml are located in /opt/hbase/conf

      

      Here's what I did:

      

      bash-4.3# export HADOOP_CONF_DIR="/opt/hbase/conf"

      bash-4.3# echo $HADOOP_CONF_DIR

      /opt/hbase/conf

      bash-4.3# cd /opt/hadoop/

      bash-4.3# java -jar hdfs-fs.jar

      log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
      log4j:WARN Please initialize the log4j system properly.
      log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
      Exception in thread "main" java.lang.RuntimeException: core-site.xml not found
              at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2566)
              at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2492)
              at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
              at org.apache.hadoop.conf.Configuration.set(Configuration.java:1143)
              at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
              at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1451)
              at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
              at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
              at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
              at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
              at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
              at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
              at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)

      

      On 30/08/2016 11:20 AM, 권병창 wrote:

    
    
      
        Set the environment variable below:
         
        
        export HADOOP_CONF_DIR=/opt/hadoop/etc
        

          or
         
        hdfs-site.xml and core-site.xml can be placed inside the jar.
        

          
            

            -----Original Message-----
            From: F21 <[email protected]>
            To: 권병창 <[email protected]>
            Cc:
            Date: 2016-08-30 9:12:58 AM
            Subject: Re: Installing just the HDFS client
          

          
            
            Hi,

              

              Thanks for the pom.xml. I was able to build it
              successfully. How do I point it to the config files? My
              core-site.xml and hdfs-site.xml are located in
              /opt/hadoop/etc.

              

              I tried the following:

              java -jar hdfs-fs.jar -ls /

              java -jar hdfs-fs.jar --config /opt/hbase/etc -ls /

              java -jar hdfs-fs.jar -conf /opt/hbase/etc -ls /

              

              This is the error I am getting:

              log4j:WARN No appenders could be found for logger (org.apache.hadoop.util.Shell).
              log4j:WARN Please initialize the log4j system properly.
              log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
              Exception in thread "main" java.lang.RuntimeException: core-site.xml not found
                      at org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2566)
                      at org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2492)
                      at org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2405)
                      at org.apache.hadoop.conf.Configuration.set(Configuration.java:1143)
                      at org.apache.hadoop.conf.Configuration.set(Configuration.java:1115)
                      at org.apache.hadoop.conf.Configuration.setBoolean(Configuration.java:1451)
                      at org.apache.hadoop.util.GenericOptionsParser.processGeneralOptions(GenericOptionsParser.java:321)
                      at org.apache.hadoop.util.GenericOptionsParser.parseGeneralOptions(GenericOptionsParser.java:487)
                      at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:170)
                      at org.apache.hadoop.util.GenericOptionsParser.<init>(GenericOptionsParser.java:153)
                      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:64)
                      at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
                      at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)

              

              How do I point the jar to use the hdfs-site.xml and
              core-site.xml located in /opt/hadoop/etc?

              

              Cheers,

              Francis

              

              On 29/08/2016 5:18 PM, 권병창 wrote:

            
            
              
                Hi,

                Refer to the pom.xml below and change the Hadoop version to
                the one you want.

                The build is simple: mvn clean package

                It will produce a jar of about 34 MB.

                Usage is simple:

                java -jar ${build_jar}.jar -mkdir /user/home
                java -jar ${build_jar}.jar -ls /user/home

                
                 

                 

                pom.xml

                <?xml version="1.0" encoding="UTF-8"?> 

<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">

  <modelVersion>4.0.0</modelVersion>

  <groupId>com.naver.c3</groupId>
  <artifactId>hdfs-connector</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-common</artifactId>
      <version>2.7.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-hdfs</artifactId>
      <version>2.7.1</version>
    </dependency>
  </dependencies>

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>2.3</version>
        <executions>
          <execution>
            <phase>package</phase>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <minimizeJar>false</minimizeJar>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                  <mainClass>org.apache.hadoop.fs.FsShell</mainClass>
                </transformer>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
              </transformers>
           </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>

</project>
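
As a quick sanity check (a sketch; the jar name is derived from the artifactId/version above and may differ in your build), the ManifestResourceTransformer in this pom is what writes the Main-Class entry that makes `java -jar` work, and you can confirm it is present:

```shell
# Print the Main-Class entry written by the ManifestResourceTransformer;
# it should name org.apache.hadoop.fs.FsShell
unzip -p target/hdfs-connector-1.0-SNAPSHOT.jar META-INF/MANIFEST.MF \
  | grep '^Main-Class:'
```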
                 

                 

                -----Original Message-----

                  From: "F21"<[email protected]>
                  

                  To: <[email protected]>;
                  

                  Cc: 

                  Sent: 2016-08-29 (Mon) 14:25:09

                  Subject: Installing just the HDFS client

                   

                Hi all,

                

                I am currently building an HBase docker image. As part of
                the bootstrap 

                process, I need to run some `hdfs dfs` commands to
                create directories on 

                HDFS.

                

                The whole Hadoop distribution is pretty heavy and contains
                things to run 

                namenodes, etc. I just need a copy of the dfs client for
                my docker 

                image. I have done some poking around and see that I
                need to include the 

                files in bin/, libexec/, lib/ and share/hadoop/common
                and share/hadoop/hdfs.

                

                However, including the above still takes up quite a bit
                of space. Is 

                there a single JAR I can add to my image to perform
                operations against HDFS?

                

                

                Cheers,

                

                Francis

                

                

---------------------------------------------------------------------

                To unsubscribe, e-mail: [email protected]

                For additional commands, e-mail: [email protected]

                

                

                

              
              
                
              
            
            

            

          
      
      
        
      
    
    

    

  


