We have a pipeline set up to move data from an Oracle database into Google Cloud's BigQuery in real time. The pipeline is a simple Extract -> Receiver -> Replicat, and it works flawlessly for all tables except one.
This one specific table throws a mapping error:
OGG-01296 Oracle GoldenGate Delivery, Replicat.prm: Error mapping from REPOSITORY.ORACLE_TABLE to DATAWAREHOUSE.BIGQUERY_TABLE
Here is the catch: the schemas of both tables are compatible according to the logs:
Table [...] in BigQuery has a compatible schema with the source metadata
I manually verified that all fields are present with the correct types, and I didn't find any problems. Furthermore, we previously did an initial load from the same source into a target with an identical schema (except for the autogenerated GoldenGate metadata fields) and it worked without a hitch.
It doesn't seem to be a specific broken record either (though I'm not entirely ruling that out), given that the Replicat always writes two or three rows to that table before abending.
So we are stumped. What could possibly cause GoldenGate to throw a mapping error when the schemas are compatible?
As @DamiãoMartins suggested, enabling table-level supplemental logging on the source Oracle table solved this issue.
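For reference, table-level supplemental logging can be enabled either directly in SQL or through GGSCI with ADD TRANDATA. A minimal sketch, using the source table name from the error message above (the credential alias `ggadmin` is a placeholder for your own GoldenGate admin user):

```sql
-- Option 1: directly in SQL*Plus, logging all columns of the source table
ALTER TABLE REPOSITORY.ORACLE_TABLE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

-- Option 2: the usual GoldenGate workflow, from GGSCI
-- GGSCI> DBLOGIN USERIDALIAS ggadmin
-- GGSCI> ADD TRANDATA REPOSITORY.ORACLE_TABLE ALLCOLS
```

You can verify it took effect with `INFO TRANDATA REPOSITORY.ORACLE_TABLE` in GGSCI.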
If you are experiencing a similar, seemingly arbitrary mapping error on a single table, make sure to check that.