This has been really confusing for me, and I couldn't find an answer even from AWS Support. The docs suggest that ConsumedWriteCapacityUnits is emitted at a 1-minute granularity.
I want to understand whether the emitted value is the sum of all WCUs consumed over the minute (so I need to divide by 60 to get the actual WCU/second), or whether it is already an averaged value (WCU/sec) published every minute that I can use as-is.
I need this because I am setting up a monitor for high WCU usage, and my current threshold is 1000 WCU per partition, which is the maximum WCU/second DynamoDB currently supports per partition.
DynamoDB meters consumed capacity at the per-second level, but when publishing to CloudWatch it aggregates those per-second values over 60 seconds, giving you a 1-minute data point.
So when looking at the CloudWatch metrics, you need to take the Sum of ConsumedWriteCapacityUnits (or ConsumedReadCapacityUnits) and divide it by the PERIOD, which is 60 when looking at 1-minute granularity.
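As a rough sketch with boto3 (the table name and time range are placeholders), this pulls the 1-minute Sum and applies the division with CloudWatch metric math:

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Pull the 1-minute Sum of ConsumedWriteCapacityUnits and convert it to an
# average WCU/second using metric math. "YourTableName" is a placeholder.
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "wcu_sum",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/DynamoDB",
                    "MetricName": "ConsumedWriteCapacityUnits",
                    "Dimensions": [{"Name": "TableName", "Value": "YourTableName"}],
                },
                "Period": 60,   # 1-minute data points
                "Stat": "Sum",  # total WCUs consumed during each minute
            },
            "ReturnData": False,
        },
        {
            "Id": "wcu_per_second",
            # Divide the per-minute total by the period (60 s) to get WCU/second.
            "Expression": "wcu_sum / PERIOD(wcu_sum)",
            "ReturnData": True,
        },
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
)

result = response["MetricDataResults"][0]
for timestamp, value in zip(result["Timestamps"], result["Values"]):
    print(timestamp, value)
```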
In the DynamoDB console, this metric math is already included in your graphs.
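If you are building the alarm outside the console, the same expression can be used in a CloudWatch metric-math alarm. A minimal boto3 sketch, assuming a placeholder table and alarm name and the 1000 WCU/second threshold from the question (note that CloudWatch exposes consumption at the table or GSI level, not per partition):

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm when the average consumption exceeds 1000 WCU/second over a 1-minute period.
cloudwatch.put_metric_alarm(
    AlarmName="YourTableName-high-wcu-per-second",  # placeholder
    ComparisonOperator="GreaterThanThreshold",
    Threshold=1000,            # the per-partition maximum mentioned above
    EvaluationPeriods=1,
    TreatMissingData="notBreaching",
    Metrics=[
        {
            "Id": "wcu_sum",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/DynamoDB",
                    "MetricName": "ConsumedWriteCapacityUnits",
                    "Dimensions": [{"Name": "TableName", "Value": "YourTableName"}],
                },
                "Period": 60,
                "Stat": "Sum",
            },
            "ReturnData": False,
        },
        {
            "Id": "wcu_per_second",
            "Expression": "wcu_sum / PERIOD(wcu_sum)",
            "Label": "WCU per second",
            "ReturnData": True,  # the alarm evaluates this expression
        },
    ],
)
```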