I'm trying to understand the behaviour of (py)Spark checkpointing. Let's say there is some source data A (on S3 or HDFS) with intermediate checkpointed RDDs B and C:
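In (py)Spark terms I mean something roughly like this (a minimal sketch; the paths and transformations are just placeholders):

```python
from pyspark import SparkContext

sc = SparkContext(appName="checkpoint-example")
sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints")  # placeholder path

# A: the source data on S3/HDFS
A = sc.textFile("s3a://my-bucket/source/")  # placeholder path

# B: an intermediate RDD derived from A, checkpointed
B = A.map(lambda line: line.split(","))
B.checkpoint()
B.count()  # an action materialises the checkpoint

# C: derived from B, also checkpointed
C = B.filter(lambda fields: len(fields) > 1)
C.checkpoint()
C.count()
```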
If the source data changes (e.g. new data is added), will B and C be recalculated?
If not, is the standard approach to name the checkpoint directory per run (e.g. with a timestamp), or something else?
I did consider Spark checkpointing behaviour, but the answers there only covered code changes.
No. Checkpoints B and C are snapshots; their purpose is to avoid recomputation from the source A in case of a failure during the Spark application.
Conversely, changes to the source are irrelevant and not recognized as long as there is no failure, whether or not checkpointing is applied. If there is a failure and no checkpointing, then depending on the type of source, newer changed data may be read during recomputation, but that is handled neatly per Spark Stage.
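If you want each run of the application to pick up fresh source data and keep its checkpoints separate from previous runs, one option (a sketch only, with a placeholder base path, not something Spark mandates) is to set a new checkpoint directory per run, e.g. keyed by a timestamp:

```python
import time
from pyspark import SparkContext

sc = SparkContext(appName="checkpoint-per-run")

# Use a fresh checkpoint directory for every run of the application,
# so checkpoints from one run are never confused with the next.
# The base path is a placeholder; adjust for your S3/HDFS layout.
run_ts = int(time.time())
sc.setCheckpointDir("hdfs:///tmp/spark-checkpoints/run-{}".format(run_ts))
```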