apache-spark, hadoop-yarn, hortonworks-data-platform, resource-management

Spark on YARN: too few vcores used


I'm using Spark in a YARN cluster (HDP 2.4) with the following settings:

When I run my Spark application with

    spark-submit --num-executors 30 --executor-cores 3 --executor-memory 7g --driver-cores 1 --driver-memory 1800m ...

YARN creates 31 containers (one for each executor process plus one for the driver process) with the following settings:
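For reference, this is roughly what I would expect those flags to request (my own back-of-the-envelope numbers, assuming Spark's default executor memory overhead of max(384 MB, 10% of --executor-memory), before YARN rounds each container up to a multiple of yarn.scheduler.minimum-allocation-mb):

    vcores: 30 executors x 3 cores + 1 driver core             = 91 vcores
    memory: 30 x (7g + ~0.7g overhead) + (1800m + 384m driver) ≈ 233 GB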

[Screenshot of the YARN UI: each of the 31 containers is shown with only 1 vcore allocated]

My question here: Why does the spark-submit parameter --executor-cores 3 have no effect?


Solution

  • OK, this seems to be the same issue as discussed here: yarn is not honouring yarn.nodemanager.resource.cpu-vcores. The solution there also worked for me (summary below).
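    In short (summarizing the linked answer): by default YARN's CapacityScheduler uses the DefaultResourceCalculator, which considers only memory when sizing containers, so the vcore request is ignored and every container gets 1 vcore. Switching to the DominantResourceCalculator makes the scheduler account for CPU as well. The property below goes into capacity-scheduler.xml (on HDP, change it via Ambari) and takes effect after a ResourceManager restart; the exact steps may differ per distribution:

        <property>
          <name>yarn.scheduler.capacity.resource-calculator</name>
          <value>org.apache.hadoop.yarn.util.resource.DominantResourceCalculator</value>
        </property>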