Hi guys,
I am using the Hortonworks HDP Sandbox. I've configured a simple Spark job that takes a text file and writes the word counts to another file.
Anyway, the problem I have is with the Oozie coordinator when I schedule the job to repeat every 5 minutes: each time, the coordinator creates 12 identical workflows, and I don't know why. Here is my coordinator configuration:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<coordinator-app xmlns="uri:oozie:coordinator:0.5" end="2020-01-02T17:53Z" frequency="${coord:minutes(5)}" name="Simple Spark Scala Coordinator" start="2019-05-01T17:53Z" timezone="GMT+04:00">
    <action>
        <workflow>
            <app-path>/user/admin/tmp/workflow.xml</app-path>
        </workflow>
    </action>
</coordinator-app>
And here is the workflow configuration:
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<workflow-app xmlns="uri:oozie:workflow:0.5" name="Spark Scala Example Workflow">
    <start to="spark_scala_example"/>
    <action name="spark_scala_example">
        <spark xmlns="uri:oozie:spark-action:0.2">
            <job-tracker>${resourceManager}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/user/${wf:user()}/tmp/result_large"/>
            </prepare>
            <master>local</master>
            <name>Spark Scala Example Action</name>
            <class>com.example.App</class>
            <jar>${nameNode}/user/${wf:user()}/tmp/spark-scala-example-1.0-SNAPSHOT.jar</jar>
            <arg>${nameNode}/user/${wf:user()}/tmp/test_large.txt</arg>
            <arg>${nameNode}/user/${wf:user()}/tmp/result_large</arg>
        </spark>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>${wf:errorMessage(wf:lastErrorNode())}</message>
    </kill>
    <end name="end"/>
</workflow-app>
Do you guys have any idea why it creates 12 copies of the same workflow?
Did you check whether the nominal time of those 12 workflows is the same?
If you set the coordinator's start date in the past, Oozie compensates for the missed intervals: it materializes one catch-up action for every 5-minute slot between the start date and now. The reason you see exactly 12 at a time is most likely Oozie's default throttle (the oozie.service.coord.default.throttle property), which limits the number of coordinator actions materialized in WAITING state at once, and which defaults to 12.
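If you only want runs from "now" onwards, the simplest fix is to set start= to the time you actually submit the job. Alternatively, you can tame the catch-up behaviour per coordinator with a <controls> block. Here is a minimal sketch based on your coordinator (the timestamps are your originals; the control values are illustrative and you'd tune them to your needs):

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<coordinator-app xmlns="uri:oozie:coordinator:0.5" end="2020-01-02T17:53Z" frequency="${coord:minutes(5)}" name="Simple Spark Scala Coordinator" start="2019-05-01T17:53Z" timezone="GMT+04:00">
    <controls>
        <!-- discard an action if it sits in WAITING longer than 5 minutes -->
        <timeout>5</timeout>
        <!-- run at most one action at a time -->
        <concurrency>1</concurrency>
        <!-- when catching up, execute only the most recent action and skip older ones -->
        <execution>LAST_ONLY</execution>
        <!-- materialize at most one WAITING action instead of the default 12 -->
        <throttle>1</throttle>
    </controls>
    <action>
        <workflow>
            <app-path>/user/admin/tmp/workflow.xml</app-path>
        </workflow>
    </action>
</coordinator-app>

With execution set to LAST_ONLY, the backfilled actions effectively collapse to the latest one, which is usually what you want for a job like this where each run overwrites the same output directory anyway.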