hadoop apache-spark amazon-ec2 emr

Using spark-submit externally from EMR cluster master


We have a Hadoop cluster running in AWS Elastic MapReduce (EMR) with Spark 1.6.1. There's no problem logging into the cluster master and submitting Spark jobs there, but we'd like to be able to submit them from another, independent EC2 instance.

The other 'external' EC2 instance has its security groups set up to allow all TCP traffic to and from the EMR master and slave instances. It has a binary installation of Spark downloaded directly from Apache's site.
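For concreteness, the external instance was prepared roughly like this (a sketch only: the download URL, master hostname placeholder, and destination paths are illustrative, not copied from our environment):

$ wget https://archive.apache.org/dist/spark/spark-1.6.1/spark-1.6.1-bin-hadoop2.6.tgz
$ tar xzf spark-1.6.1-bin-hadoop2.6.tgz
$ sudo mv spark-1.6.1-bin-hadoop2.6 /usr/local/spark
$ # copy the cluster's Hadoop client configuration from the EMR master
$ scp -r hadoop@<emr-master-dns>:/etc/hadoop/conf ~/hadoop-conf
$ export HADOOP_CONF_DIR=~/hadoop-conf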

Having copied the /etc/hadoop/conf folder from the master to this instance and set $HADOOP_CONF_DIR accordingly, when I attempt to submit the SparkPi example I run into the following permission issue:

$ /usr/local/spark/bin/spark-submit --master yarn --deploy-mode client --class org.apache.spark.examples.SparkPi /usr/local/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar 
16/06/22 13:58:52 INFO spark.SparkContext: Running Spark version 1.6.1
16/06/22 13:58:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/06/22 13:58:52 INFO spark.SecurityManager: Changing view acls to: jungd
16/06/22 13:58:52 INFO spark.SecurityManager: Changing modify acls to: jungd
16/06/22 13:58:52 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions:     Set(jungd); users with modify permissions: Set(jungd)
16/06/22 13:58:52 INFO util.Utils: Successfully started service 'sparkDriver' on port 34757.
16/06/22 13:58:52 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/06/22 13:58:52 INFO Remoting: Starting remoting
16/06/22 13:58:53 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@172.31.61.189:39241]
16/06/22 13:58:53 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 39241.
16/06/22 13:58:53 INFO spark.SparkEnv: Registering MapOutputTracker
16/06/22 13:58:53 INFO spark.SparkEnv: Registering BlockManagerMaster
16/06/22 13:58:53 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-300d738e-d7e4-4ae9-9cfe-4e257a05d456
16/06/22 13:58:53 INFO storage.MemoryStore: MemoryStore started with capacity 511.1 MB
16/06/22 13:58:53 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/06/22 13:58:53 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/06/22 13:58:53 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/06/22 13:58:53 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/06/22 13:58:53 INFO ui.SparkUI: Started SparkUI at http://172.31.61.189:4040
16/06/22 13:58:53 INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-5e332986-ae2a-4bde-9ae4-edb4fac5e1d7/httpd-e475fd1b-c5c8-4f31-9699-be89fff4a69c
16/06/22 13:58:53 INFO spark.HttpServer: Starting HTTP Server
16/06/22 13:58:53 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/06/22 13:58:53 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:43525
16/06/22 13:58:53 INFO util.Utils: Successfully started service 'HTTP file server' on port 43525.
16/06/22 13:58:53 INFO spark.SparkContext: Added JAR file:/usr/local/spark/lib/spark-examples-1.6.1-hadoop2.6.0.jar at http://172.31.61.189:43525/jars/spark-examples-1.6.1-hadoop2.6.0.jar with timestamp 1466603933454
16/06/22 13:58:53 INFO client.RMProxy: Connecting to ResourceManager at ip-172-31-60-166.ec2.internal/172.31.60.166:8032
16/06/22 13:58:53 INFO yarn.Client: Requesting a new application from cluster with 2 NodeManagers
16/06/22 13:58:53 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (11520 MB per container)
16/06/22 13:58:53 INFO yarn.Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
16/06/22 13:58:53 INFO yarn.Client: Setting up container launch context for our AM
16/06/22 13:58:53 INFO yarn.Client: Setting up the launch environment for our AM container
16/06/22 13:58:53 INFO yarn.Client: Preparing resources for our AM container
16/06/22 13:58:54 ERROR spark.SparkContext: Error initializing SparkContext.
org.apache.hadoop.security.AccessControlException: Permission denied: user=jungd, access=WRITE, inode="/user/jungd/.sparkStaging/application_1466437015320_0014":hdfs:hadoop:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:319)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:292)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:213)

It makes no difference if I submit using cluster deploy mode. The user in question is a local user on the 'external' EC2 instance (we have multiple developer accounts) that does not exist on the master or slaves of the cluster (and even locally, the users' home directories are under /home, not /user).

I'm at a loss to figure out what is going on. Any help greatly appreciated.


Solution

  • A couple of things are required to run spark-submit from a machine other than the master:

    Each user that submits jobs needs a home directory in HDFS (e.g. /user/jungd) that they own. The AccessControlException above points at exactly this: the driver tries to stage job files under /user/<username>/.sparkStaging, but that path is owned by hdfs:hadoop with mode drwxr-xr-x, so a user unknown to HDFS has no write access. See the sketch below.

    It may also be necessary to create the users as Linux accounts on the master.
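A minimal sketch of the fix, run on the EMR master (the username jungd is taken from the logs above; we assume the hdfs superuser account present on a typical EMR node; adjust names to your environment):

$ # create the user's HDFS home directory so .sparkStaging can be written there
$ sudo -u hdfs hdfs dfs -mkdir -p /user/jungd
$ sudo -u hdfs hdfs dfs -chown jungd:jungd /user/jungd
$ # optionally, the matching Linux account on the master
$ sudo adduser jungd

Repeat for each developer account that needs to submit jobs from the external instance.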