I have an HDFS cluster with Active and Standby NameNodes. Sometimes when the cluster is restarted, the NameNodes swap roles: the Standby becomes Active, and vice versa.
I also have a NiFi flow with a PutParquet processor writing files to this HDFS cluster. The processor's Directory property is set to "hdfs://${namenode}/some/path", where the ${namenode} variable holds a value like "first.namenode.host.com:8020".
Now, when the cluster is restarted and the Active NameNode changes to "second.namenode.host.com:8020", the configuration in NiFi is not updated: the processor still tries to use the old NameNode address, so an exception is thrown (I don't remember the actual error text, but I don't think it matters for my question).
So the question is: how can I track this event in NiFi, so that the PutParquet processor's configuration is automatically updated when the HDFS configuration changes?
NiFi version is 1.6.0, HDFS version is 2.6.0-cdh5.8.3
I haven't confirmed this, but with HA HDFS (Active and Standby NameNodes) you would normally have the HA properties set in your *-site.xml files (probably core-site.xml) and would refer to the "cluster name" (nameservice), which the Hadoop client resolves into a list of NameNodes and then tries to connect to in turn. If that's the case, try using the cluster name (see the core-site.xml file on the cluster) rather than a hardcoded NameNode address.
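For illustration, a sketch of what the HA client configuration typically looks like (the nameservice name "mycluster" and the hostnames are placeholders; the property names are the standard Hadoop HA ones, but check your cluster's actual hdfs-site.xml/core-site.xml):

```xml
<!-- hdfs-site.xml: define a logical nameservice and its two NameNodes -->
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>first.namenode.host.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>second.namenode.host.com:8020</value>
</property>
<!-- lets the client fail over to whichever NameNode is Active -->
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With that in place, the processor's Directory could be "hdfs://mycluster/some/path" instead of a host:port, and the client handles failover itself. In NiFi, the Hadoop-based processors (including PutParquet) have a Hadoop Configuration Resources property where you point at these core-site.xml and hdfs-site.xml files so the client picks the HA settings up.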