java hadoop distributed-filesystem

DistributedFileSystem class uses the local file system instead of the distributed one


I have this line in my code:

DistributedFileSystem.get(conf).delete(new Path(new URI(otherArgs[1])), true);    

otherArgs[1] has this value: hdfs://master:54310/input/results

I receive this exception:

Exception in thread "main" java.lang.IllegalArgumentException: Wrong FS: hdfs://master:54310/input/results, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:354)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:55)
at org.apache.hadoop.fs.RawLocalFileSystem.getFileStatus(RawLocalFileSystem.java:367)
at org.apache.hadoop.fs.ChecksumFileSystem.delete(ChecksumFileSystem.java:430)
at <package>.<classname>.main(Degree.java:137)    

Note: I tried new Path(otherArgs[1]) without the URI and got the exact same error!

Thanks, -K


Solution

  • It turns out that I was running my jar with "hadoop -jar" instead of "hadoop jar". All conf files are correct and in place.

    Problem solved, but I still have no idea why using "-jar" made it run in local (pseudo-distributed) mode!
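
For anyone hitting the same exception: the "Wrong FS" message comes from FileSystem.checkPath, which compares the scheme and authority of the path's URI against the filesystem instance the call was made on. If the job runs with the default configuration (fs.default.name = file:///, which is what you get when the cluster conf files aren't picked up), FileSystem.get(conf) returns a local filesystem, and any hdfs:// path then fails the check. A minimal sketch of the mismatch using only java.net.URI (the class name and path are just the ones from the question):

```java
import java.net.URI;

public class WrongFsDemo {
    public static void main(String[] args) throws Exception {
        // The path from the question. Hadoop's FileSystem.checkPath compares
        // this URI's scheme/authority against the filesystem it was called on;
        // a local filesystem expects file:///, hence "Wrong FS ... expected: file:///".
        URI uri = new URI("hdfs://master:54310/input/results");
        System.out.println(uri.getScheme());    // hdfs, not file
        System.out.println(uri.getAuthority()); // master:54310
    }
}
```

With the Hadoop libraries on the classpath, a more robust fix is to resolve the filesystem from the path itself, e.g. `new Path(otherArgs[1]).getFileSystem(conf)` or `FileSystem.get(new URI(otherArgs[1]), conf)`, so the right implementation is chosen from the scheme regardless of what fs.default.name says.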