I want a scheduler in my cluster that sends messages after some delay. From what I can see, the scheduler is per ActorSystem, and from my tests it only works for the local actor system, not the cluster-wide one. So if I schedule something on one node and that node goes down, all of its scheduled tasks are discarded.
If I create a Cluster Singleton responsible for scheduling, would the already created schedules survive its recreation on another node? Or should I make it a persistent actor that keeps the metadata of the schedules it has created and, in the preStart phase, reschedules everything that was persisted?
A cluster singleton will reincarnate on another node if the node it was previously on is downed or leaves the cluster.
That reincarnation will start with a clean slate: it won't remember its "past lives".
However, if it's a persistent actor (or, equivalently, its behavior is an EventSourcedBehavior in Akka Typed), it will on startup recover its state from the event stream (and/or snapshots). For a persistent actor, this typically doesn't require anything to be done in preStart: the persistence implementation takes care of replaying the events.
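To make the replay idea concrete, here is a minimal, dependency-free Scala sketch of what recovery looks like conceptually. The names (ScheduleEvent, TaskScheduled, recoverSchedules) are illustrative, not Akka API; in real code the fold below is what your EventSourcedBehavior's event handler does for you during recovery.

```scala
// Illustrative model of event-sourced schedule recovery.
// These types are hypothetical; real code would use Akka Persistence.
object ScheduleRecovery {
  sealed trait ScheduleEvent
  final case class TaskScheduled(id: String, fireAtMillis: Long) extends ScheduleEvent
  final case class TaskFired(id: String) extends ScheduleEvent

  // Replaying the journal in order rebuilds the pending-task state,
  // which is exactly what happens on recovery: each persisted event
  // is applied to the state by the event handler.
  def recoverSchedules(journal: Seq[ScheduleEvent]): Map[String, Long] =
    journal.foldLeft(Map.empty[String, Long]) {
      case (pending, TaskScheduled(id, at)) => pending + (id -> at)
      case (pending, TaskFired(id))         => pending - id
    }
}
```

After recovery, the reincarnated singleton can walk the resulting map and start fresh timers for every task still pending, so nothing extra is needed in preStart beyond what the recovered state already tells you.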
Depending on how many tasks are scheduled, and on whether you want the schedule to be discarded on a full cluster restart, it may also be possible to use Akka Distributed Data to replicate the schedule metadata around the cluster (with tunable consistency) and then have a cluster-singleton scheduling actor read that metadata.
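The Distributed Data approach rests on CRDT-style merge semantics: every replica can accept writes, and concurrent copies of the schedule map converge deterministically. A toy last-writer-wins merge, sketched in plain Scala (real code would use Akka's LWWMap, which also handles clock skew and removals properly), looks like this:

```scala
// Toy last-writer-wins map merge illustrating how replicated schedule
// metadata converges. Hypothetical sketch; not Akka's actual LWWMap.
object ScheduleCrdt {
  // Entry = (writeTimestamp, fireAtMillis)
  type Entry = (Long, Long)

  // Merge two replicas: for each key, keep the entry with the
  // newer write timestamp; entries present on only one side survive.
  def merge(a: Map[String, Entry], b: Map[String, Entry]): Map[String, Entry] =
    (a.keySet ++ b.keySet).map { k =>
      val v = (a.get(k), b.get(k)) match {
        case (Some(x), Some(y)) => if (x._1 >= y._1) x else y
        case (Some(x), None)    => x
        case (None, Some(y))    => y
        case (None, None)       => sys.error("unreachable: k is in the union")
      }
      k -> v
    }.toMap
}
```

Because the merge is commutative and idempotent, the singleton can read whichever replica is local and still see a consistent view of the schedule once replication catches up.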