Here is a simplified version of an API that I am developing. Different users can create, update, and delete entities of some kind, and I need to periodically prepare or update one ZIP file per customer, containing the latest versions of their entities.
My idea is to store the entities in a DynamoDB table and then periodically run a batch process that reads the changes from the table's stream. My question is: how do I make sure that each subsequent batch read continues from the correct place, i.e. from the first unread event?
A bit more info:
This is kind of a follow-up question to this answer: https://stackoverflow.com/a/44010290/106350.
With streams you have an iterator position per shard, which you can use as a pointer to the last read position. You can read more on this here: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html#Streams.Processing
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_GetShardIterator.html
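As a concrete sketch of the checkpointing these docs describe: you ask `GetShardIterator` for an `AFTER_SEQUENCE_NUMBER` iterator using the last sequence number you persisted, then page through `GetRecords`. Here the Streams client is passed in (e.g. `boto3.client("dynamodbstreams")`), and `checkpoints` is a plain dict standing in for whatever durable store you keep your per-shard positions in; both are assumptions for illustration, not part of the AWS API.

```python
def read_new_records(streams, stream_arn, checkpoints):
    """Yield unread records from every shard of a DynamoDB stream.

    `streams` is a DynamoDB Streams client (e.g. boto3.client("dynamodbstreams")).
    `checkpoints` maps shard ID -> last processed sequence number; persist it
    somewhere durable between batch runs so the next run resumes correctly.
    """
    desc = streams.describe_stream(StreamArn=stream_arn)["StreamDescription"]
    for shard in desc["Shards"]:
        shard_id = shard["ShardId"]
        last_seq = checkpoints.get(shard_id)
        if last_seq is not None:
            # Resume immediately after the last record we processed.
            iterator = streams.get_shard_iterator(
                StreamArn=stream_arn,
                ShardId=shard_id,
                ShardIteratorType="AFTER_SEQUENCE_NUMBER",
                SequenceNumber=last_seq,
            )["ShardIterator"]
        else:
            # First run for this shard: start at the oldest available record.
            iterator = streams.get_shard_iterator(
                StreamArn=stream_arn,
                ShardId=shard_id,
                ShardIteratorType="TRIM_HORIZON",
            )["ShardIterator"]
        while iterator:
            resp = streams.get_records(ShardIterator=iterator, Limit=100)
            for record in resp["Records"]:
                yield record
                # Checkpoint after each record so a crash resumes at the
                # first truly unread event.
                checkpoints[shard_id] = record["dynamodb"]["SequenceNumber"]
            if not resp["Records"]:
                break  # caught up on this shard for this batch run
            iterator = resp.get("NextShardIterator")
```

Note that shards are retained for roughly 24 hours and can split over time, so in practice you'd also handle closed/child shards, or let DynamoDB-triggered Lambda or the Kinesis adapter manage iterator state for you.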