I am evaluating the Apache Ignite data grid for production use.
One of the critical requirements is a well-defined strategy for upgrading a large system to a binary-incompatible version (usually unavoidable with binary protocols such as the one Ignite uses). More specifically, upgrading the Ignite infrastructure independently (before or after) of the large number of Ignite client node components and/or Ignite thin clients.
So I am wondering what such a process would look like in a situation where upgrading all components of the system in one big bang is not practically possible.
If your primary objective is for clients to keep accessing the cluster without downtime during an upgrade, I recommend that most of those clients be 'thin' clients, such as the JDBC driver, ODBC driver, REST API, or the Java/C#/C++/Node.js thin clients currently under active development. Thin clients have no strict version checking.
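To illustrate, connecting with the Java thin client looks roughly like this. This is a minimal sketch, not a production configuration: the address and the cache name `myCache` are placeholders, it assumes an Ignite node listening on the default thin-client port 10800, and it needs `ignite-core` on the classpath.

```java
import org.apache.ignite.Ignition;
import org.apache.ignite.client.ClientCache;
import org.apache.ignite.client.IgniteClient;
import org.apache.ignite.configuration.ClientConfiguration;

public class ThinClientExample {
    public static void main(String[] args) {
        // A thin client speaks a lightweight binary protocol on port 10800
        // and does not join the cluster topology, so it is not subject to
        // the strict version check performed when a client node joins.
        ClientConfiguration cfg = new ClientConfiguration()
                .setAddresses("127.0.0.1:10800"); // placeholder address

        try (IgniteClient client = Ignition.startClient(cfg)) {
            ClientCache<Integer, String> cache = client.getOrCreateCache("myCache");
            cache.put(1, "Hello");
            System.out.println(cache.get(1));
        }
    }
}
```

Because the thin client holds only a socket to the cluster rather than membership in it, the server nodes can be upgraded on their own schedule and the client simply reconnects.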
So you should use 'thick' clients (a.k.a. Apache Ignite client nodes) only for operations that can't be performed by thin clients, or use Rolling Upgrades, as mentioned.
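For SQL access the JDBC thin driver follows the same pattern. A sketch, assuming a node on localhost and `ignite-core` (which contains `org.apache.ignite.IgniteJdbcThinDriver`) on the classpath; the `city` table here is purely hypothetical:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcThinExample {
    public static void main(String[] args) throws Exception {
        // The JDBC thin driver connects over the same port-10800 binary
        // protocol as the other thin clients, with no cluster membership.
        try (Connection conn = DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/");
             Statement stmt = conn.createStatement()) {
            stmt.executeUpdate("CREATE TABLE IF NOT EXISTS city (id INT PRIMARY KEY, name VARCHAR)");
            stmt.executeUpdate("MERGE INTO city (id, name) VALUES (1, 'Oslo')");
            try (ResultSet rs = stmt.executeQuery("SELECT name FROM city WHERE id = 1")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}
```

If most of your application's interaction with the grid is SQL, standardizing on the JDBC/ODBC thin drivers keeps the upgrade surface small: only the server nodes carry the tightly version-checked Ignite runtime.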