We are running version 1.9 of the DataStax Java driver, and our config looks like this (this works):
cassandra.contactpoints=tfi-db-ddac-001.tfi.myCompany.net,tfi-db-ddac-002.tfi.myCompany.net
cassandra.username=username
cassandra.password=password
cassandra.keyspace.create=CREATE KEYSPACE myKeySpace WITH replication = {'class' : 'NetworkTopologyStrategy', 'DC1' : 1, 'DC2' : 1};
cassandra.keyspace_log.create=CREATE KEYSPACE myKeySpace_log WITH replication = {'class' : 'NetworkTopologyStrategy', 'DC1' : 1, 'DC2' : 1};
cassandra.log_entries.write_consistency_level=TWO
cassandra.metric_monitor.write_consistency_level=TWO
cassandra.app_tracking.write_consistency_level=TWO
After updating the dependencies to 4.x and changing the code to this:
import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

try (CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("tfi-db-ddac-001.tfi.myCompany.net", 9042))
        .addContactPoint(new InetSocketAddress("tfi-db-ddac-002.tfi.myCompany.net", 9042))
        .withLocalDatacenter("dc1")
        .withAuthCredentials("username", "password")
        .build()) {
    // ...
}
I always get this error:
You specified dc1 as the local DC, but some contact points are from a different DC: Node(endPoint=tfi-db-ddac-002.tfi.myCompany.net/10.8.64.97:9042, hostId=43e3df16-1e44-4aff-b0ac-2fee0e17ace5, hashCode=2043258b)=ggi-l, Node(endPoint=tfi-db-ddac-001.tfi.myCompany.net/10.8.64.95:9042, hostId=4d5b9290-8c92-4f1a-b348-51d42d439e2b, hashCode=7f1da1b2)=ggi-s; please provide the correct local DC, or check your contact points
Can someone please give me advice or an example of how to migrate this to the 4.x driver?
Thanks in advance!
Version 4 of the Cassandra Java driver was refactored from the ground up, so it is not binary-compatible with older versions, including the DSE releases of the driver.
One of the biggest differences compared to older versions of the Java driver is that the built-in load-balancing policies (DefaultLoadBalancingPolicy and DcInferringLoadBalancingPolicy) will only ever connect to ONE datacenter. For this reason, the driver only accepts contact points that belong to the configured local DC (if using the default load-balancing policy).
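For example, the error above shows that your nodes report the DC names ggi-l and ggi-s, not dc1. A minimal sketch of a fix, assuming ggi-s is the DC local to your application (pick whichever DC actually is local, and keep only contact points from it):

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;

// Sketch: per the error message, tfi-db-ddac-001 reports DC "ggi-s", so it
// is kept as a contact point; tfi-db-ddac-002 (DC "ggi-l") is dropped
// because contact points should belong to the local DC.
try (CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("tfi-db-ddac-001.tfi.myCompany.net", 9042))
        .withLocalDatacenter("ggi-s") // must match what the node itself reports
        .withAuthCredentials("username", "password")
        .build()) {
    // ... use the session ...
}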
During the initialisation phase, the driver explicitly checks the DCs of the configured contact points (see OptionalLocalDcHelper.checkLocalDatacenterCompatibility()). When the driver detects one or more "bad" contact points, it logs a WARN message with instructions to either (a) provide the correct local DC, or (b) check the list of contact points.
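If you are unsure which name to pass, one option (a sketch of mine, not something from your post) is to inspect the driver's node metadata; since the check above only logs a warning, the session still connects and can tell you what each node reports:

import java.net.InetSocketAddress;
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.metadata.Node;

// Sketch: even with a wrong local-DC guess the driver only WARNs and still
// connects, so the metadata can list the DC name each node actually
// reports -- i.e. the value that withLocalDatacenter() needs.
try (CqlSession session = CqlSession.builder()
        .addContactPoint(new InetSocketAddress("tfi-db-ddac-001.tfi.myCompany.net", 9042))
        .withLocalDatacenter("dc1") // guess; triggers the WARN described above
        .withAuthCredentials("username", "password")
        .build()) {
    for (Node node : session.getMetadata().getNodes().values()) {
        System.out.println(node.getEndPoint() + " is in DC " + node.getDatacenter());
    }
}

Alternatively, run SELECT data_center FROM system.local; against each node in cqlsh.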
Even if none of the cluster's DCs is "local" to your application instances, the driver will still alert you when it detects this unrecommended configuration. Since it is logged as a warning (WARN), your application will still work, but the driver will never include remote nodes in the query plan.
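For completeness, the same setting can also be supplied through the driver's file-based configuration instead of the builder. A minimal application.conf sketch, again assuming ggi-s is the local DC:

datastax-java-driver {
  basic {
    contact-points = [ "tfi-db-ddac-001.tfi.myCompany.net:9042" ]
    load-balancing-policy {
      local-datacenter = "ggi-s"
    }
  }
}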
For more information, see Load balancing with the Java driver. Cheers!