I currently have a working Azure TSI environment in use.
At the moment, GET requests to the environment's availability endpoint for both store types
https://XXX.env.timeseries.azure.com/availability?api-version=2020-07-31&storeType=ColdStore
https://XXX.env.timeseries.azure.com/availability?api-version=2020-07-31&storeType=WarmStore
have started to return DateTime.MinValue in their availability range.from value. The response below is observed both in the Time Series Insights user interface and in the Chrome developer tools network tab:
{
    "availability": {
        "intervalSize": "P3600D",
        "distribution": {
            "0001-01-01T00:00:00Z": 371427749,
            "2020-04-15T00:00:00Z": 1499591,
            ...
            "2011-09-21T00:00:00Z": 137643193
        },
        "range": {
            "from": "0001-01-01T00:00:00Z",
            "to": "2021-07-03T07:05:49.182Z"
        }
    },
    "retention": "P7D"
}
Is this a bug? I can easily work around the issue by selecting the oldest valid value in the distribution. However, I am wondering what a distribution entry at DateTime.MinValue is trying to express.
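For reference, a minimal sketch of that workaround in Python, assuming the availability payload has already been fetched and deserialized; the effective_range helper and the 1970 cut-off year are my own illustrative choices, not part of the TSI API:

from datetime import datetime

def effective_range(availability_response, min_valid_year=1970):
    """Derive (from, to) while ignoring distribution buckets at DateTime.MinValue."""
    availability = availability_response["availability"]
    distribution = availability["distribution"]

    # Parse the bucket keys and drop anything before the chosen cut-off year,
    # which filters out the 0001-01-01T00:00:00Z sentinel bucket.
    buckets = [
        datetime.fromisoformat(key.replace("Z", "+00:00"))
        for key in distribution
    ]
    valid_buckets = [ts for ts in buckets if ts.year >= min_valid_year]

    range_from = min(valid_buckets) if valid_buckets else None
    range_to = datetime.fromisoformat(
        availability["range"]["to"].replace("Z", "+00:00")
    )
    return range_from, range_to

Note that the distribution keys are bucket start times at the reported intervalSize, so the derived from value is an approximation of the earliest real event rather than its exact timestamp.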
Link to the Microsoft Time Series Insights documentation
EDIT:
This seems to be an effect of me sending data into TSI with an incorrect timestamp, one equivalent to DateTime.MinValue. The response of TSI is therefore correct. However, it seems that in this particular case the warm store response of TSI:
availability?api-version=2020-07-31&storeType=WarmStore
{
"availability": {
"intervalSize": "P3600D",
"distribution": {
"2011-09-21T00:00:00Z": 132976370,
"0001-01-01T00:00:00Z": 371393382
},
"range": {
"from": "0001-01-01T00:00:00Z",
"to": "2021-07-05T14:36:16.439Z"
}
},
"retention": "P7D"
}
does not give me enough data to determine the correct warm store range?
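For completeness, a rough sketch of the kind of guard that could prevent this on the ingestion side; the event dict shape and the filter_ingestible_events helper are assumptions for illustration, not my actual ingestion code:

from datetime import datetime, timezone

# DateTime.MinValue in .NET serializes to 0001-01-01T00:00:00Z, which is the
# value that ended up in the availability distribution above.
DOTNET_MIN_VALUE = datetime(1, 1, 1, tzinfo=timezone.utc)

def filter_ingestible_events(events, min_plausible=datetime(2000, 1, 1, tzinfo=timezone.utc)):
    """Split events into (ingestible, rejected) based on their timestamp.

    Each event is assumed to be a dict carrying a timezone-aware datetime
    under the "timestamp" key; adapt to the real event shape before use.
    """
    ingestible, rejected = [], []
    for event in events:
        ts = event["timestamp"]
        if ts == DOTNET_MIN_VALUE or ts < min_plausible:
            rejected.append(event)  # log/inspect these instead of sending them to TSI
        else:
            ingestible.append(event)
    return ingestible, rejected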
The ingested event that has a timestamp of '0001-01-01T00:00:00Z' is skewing the results. The availability above provides the range based on the timestamps of the ingested events. We don't have a way to filter events via the availability API.
You will need to wait until those events get dropped from warm store due to retention. The TSI Gen2 warm store retention period is 31 days.