hadoop, hdfs, apache-spark

Spark iterate HDFS directory


I have a directory of directories on HDFS, and I want to iterate over the directories. Is there any easy way to do this with Spark using the SparkContext object?


Solution

  • You can use org.apache.hadoop.fs.FileSystem. Specifically, FileSystem.listFiles([path], true), which lists files recursively when the second argument is true.

    And with Spark...

    FileSystem.get(sc.hadoopConfiguration).listFiles(..., true)
    

    Edit

    It's worth noting that it's good practice to get the FileSystem associated with the Path's scheme.

    path.getFileSystem(sc.hadoopConfiguration).listFiles(path, true)
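
    For reference, here is a minimal sketch in Scala of how this can be put together, assuming a SparkContext named sc and a hypothetical directory hdfs:///data/input. Note that listFiles returns a Hadoop RemoteIterator rather than a Scala Iterator, so it is drained with hasNext/next:

    import org.apache.hadoop.fs.{LocatedFileStatus, Path, RemoteIterator}
    
    // Hypothetical input directory; adjust to your own layout.
    val root = new Path("hdfs:///data/input")
    
    // Get the FileSystem tied to the path's scheme, as recommended above.
    val fs = root.getFileSystem(sc.hadoopConfiguration)
    
    // listFiles(root, true) walks subdirectories recursively and yields files.
    val it: RemoteIterator[LocatedFileStatus] = fs.listFiles(root, true)
    while (it.hasNext) {
      val status = it.next()
      println(status.getPath)
    }
    

    If you only need the immediate children (including subdirectory entries themselves) rather than a recursive file listing, fs.listStatus(root) returns a plain array of FileStatus that you can iterate directly.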