apache-spark, containers, hadoop-yarn, hortonworks-data-platform, executor

Spark on YARN resource manager: Relation between YARN Containers and Spark Executors


I'm new to Spark on YARN and don't understand the relation between YARN containers and Spark executors. I tried out the following configuration, based on the results of the yarn-utils.py script, which can be used to find an optimal cluster configuration.

The Hadoop cluster (HDP 2.4) I'm working on has 12 cores, 64 GB of RAM, and 4 disks per node, with HBase installed.

So I ran python yarn-utils.py -c 12 -m 64 -d 4 -k True (c=cores, m=memory, d=hdds, k=hbase-installed) and got the following result:

 Using cores=12 memory=64GB disks=4 hbase=True
 Profile: cores=12 memory=49152MB reserved=16GB usableMem=48GB disks=4
 Num Container=8
 Container Ram=6144MB
 Used Ram=48GB
 Unused Ram=16GB
 yarn.scheduler.minimum-allocation-mb=6144
 yarn.scheduler.maximum-allocation-mb=49152
 yarn.nodemanager.resource.memory-mb=49152
 mapreduce.map.memory.mb=6144
 mapreduce.map.java.opts=-Xmx4915m
 mapreduce.reduce.memory.mb=6144
 mapreduce.reduce.java.opts=-Xmx4915m
 yarn.app.mapreduce.am.resource.mb=6144
 yarn.app.mapreduce.am.command-opts=-Xmx4915m
 mapreduce.task.io.sort.mb=2457
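
For reference, here is my own rough check of how those numbers seem to hang together (the 0.8 heap factor and the 0.4 sort-buffer factor are my assumption about the heuristics the script applies):

 # usable memory : 64 GB total - 16 GB reserved (OS + HBase)  = 48 GB
 # container RAM : 48 GB / 8 containers                       = 6144 MB
 # JVM heap      : ~0.8 * 6144 MB                             ≈ 4915 MB  (-Xmx4915m)
 # sort buffer   : ~0.4 * 6144 MB                             ≈ 2457 MB  (mapreduce.task.io.sort.mb)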

I applied these settings via the Ambari interface and restarted the cluster. The values also roughly match what I had calculated manually before.

However, I still have problems understanding how the YARN containers relate to the Spark executors.

I found the post What is a container in YARN?, but it didn't really help, as it doesn't describe the relation to the executors.

Can someone help me with one or more of these questions?


Solution

  • I will report my insights here step by step:

    => So setting yarn.scheduler.minimum-allocation-mb to a small value allows you to run smaller containers, e.g. for smaller executors (otherwise memory would be wasted).
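
    As an illustration (just a sketch with made-up numbers; my_app.jar is a placeholder), a small request like the following still gets a 6144 MB container per executor on the cluster above:

        # small executors requested, but each container is at least 6144 MB
        spark-submit \
          --master yarn \
          --deploy-mode cluster \
          --num-executors 4 \
          --executor-memory 1g \
          --executor-cores 1 \
          my_app.jar
        # Spark asks YARN for 1 GB heap plus ~384 MB overhead per executor,
        # but yarn.scheduler.minimum-allocation-mb=6144 rounds every container
        # up to 6144 MB, so most of each container's memory goes unused.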

    => Setting yarn.scheduler.maximum-allocation-mb to the maximum value (e.g. equal to yarn.nodemanager.resource.memory-mb) allows you to define bigger executors (more memory is allocated if needed, e.g. via the --executor-memory parameter).
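
    Conversely (again only a sketch; my_app.jar is a placeholder), with yarn.scheduler.maximum-allocation-mb=49152 you can request executors close to that size. Keep in mind that Spark adds spark.yarn.executor.memoryOverhead (by default max(384 MB, 10% of the executor memory)) on top of --executor-memory, and the total must stay below the maximum or YARN rejects the request:

        # big executors: each one effectively fills a whole 48 GB node
        spark-submit \
          --master yarn \
          --deploy-mode cluster \
          --num-executors 2 \
          --executor-memory 40g \
          --executor-cores 5 \
          my_app.jar
        # Effective request per executor: 40 GB heap + ~4 GB overhead ≈ 44 GB,
        # which still fits under yarn.scheduler.maximum-allocation-mb=49152 MB.
        # Requesting e.g. --executor-memory 48g would exceed the maximum and fail.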