We use Logstash to receive logs, pass them to Elasticsearch, and browse them with Kibana. A very common setup. One of the fields in each entry is @timestamp, with example content 03/18/2015 18:02:52. What filter should I use to display only the first entry of each day?
I don't believe you can do this with a filter: being first in a day isn't a property you can determine by looking at a single document. You should, however, be able to do it with an aggregation: first aggregate using a date_histogram
with interval day to group the events by day, then use the top_hits
aggregation (which requires Elasticsearch 1.3 or higher) to pull out one result per day, sorted ascending on the timestamp. Your query should look like this:
{
  "query": {
    "match_all": {}
  },
  "aggs": {
    "by-day": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "day"
      },
      "aggs": {
        "top_for_day": {
          "top_hits": {
            "size": 1,
            "sort": [
              {
                "timestamp": {
                  "order": "asc"
                }
              }
            ]
          }
        }
      }
    }
  }
}
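To try it from the command line, you can POST the request body with curl. This is a sketch: `localhost:9200` and the index pattern `logstash-*` are assumptions for a typical Logstash setup, and adding `"size": 0` suppresses the regular search hits so only the aggregation buckets come back.

```shell
# Write the aggregation request to a file, then send it to Elasticsearch.
# localhost:9200 and the index name "logstash-*" are assumptions; adjust
# them for your cluster.
cat > first_per_day.json <<'EOF'
{
  "size": 0,
  "query": { "match_all": {} },
  "aggs": {
    "by-day": {
      "date_histogram": {
        "field": "timestamp",
        "interval": "day"
      },
      "aggs": {
        "top_for_day": {
          "top_hits": {
            "size": 1,
            "sort": [ { "timestamp": { "order": "asc" } } ]
          }
        }
      }
    }
  }
}
EOF
curl -s -XGET 'http://localhost:9200/logstash-*/_search' --data-binary @first_per_day.json
```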
Which should produce results like this (trimmed slightly for brevity):
{
  "aggregations": {
    "by-day": {
      "buckets": [
        {
          "key_as_string": "2015-02-01T00:00:00.000Z",
          "key": 1422748800000,
          "doc_count": 7635,
          "top_for_day": {
            "hits": {
              "total": 7635,
              "max_score": null,
              "hits": [
                {
                  "_index": "events-2015-02",
                  "_type": "event",
                  "_id": "c64f85ac-a870-441f-bedb-e24db47fd02a",
                  "_score": null,
                  "_source": {
                    "timestamp": "2015-02-01T00:00:26Z"
                  },
                  "sort": [
                    1422748826000
                  ]
                }
              ]
            }
          }
        },
        {
          "key_as_string": "2015-02-02T00:00:00.000Z",
          "key": 1422835200000,
          "doc_count": 8182,
          "top_for_day": {
            "hits": {
              "total": 8182,
              "max_score": null,
              "hits": [
                {
                  "_index": "events-2015-02",
                  "_type": "event",
                  "_id": "c544278d-9f51-41a8-827b-9c70c0a057ca",
                  "_score": null,
                  "_source": {
                    "timestamp": "2015-02-02T00:00:19Z"
                  },
                  "sort": [
                    1422835219000
                  ]
                }
              ]
            }
          }
        }
      ]
    }
  }
}
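If you consume the response programmatically, the first event of each day reads straight out of the buckets: each bucket carries exactly one hit thanks to `"size": 1`. A minimal sketch in Python (the `response` dict below is abridged from the sample output above; the `timestamp` field name is taken from that sample):

```python
# Extract the first document of each day from a date_histogram + top_hits
# aggregation response. The dict mirrors (an abridged form of) the sample
# response shown above.
response = {
    "aggregations": {
        "by-day": {
            "buckets": [
                {
                    "key_as_string": "2015-02-01T00:00:00.000Z",
                    "top_for_day": {
                        "hits": {"hits": [
                            {"_source": {"timestamp": "2015-02-01T00:00:26Z"}}
                        ]}
                    },
                },
                {
                    "key_as_string": "2015-02-02T00:00:00.000Z",
                    "top_for_day": {
                        "hits": {"hits": [
                            {"_source": {"timestamp": "2015-02-02T00:00:19Z"}}
                        ]}
                    },
                },
            ]
        }
    }
}

def first_per_day(resp):
    """Map each day-bucket key to the _source of its single top hit."""
    out = {}
    for bucket in resp["aggregations"]["by-day"]["buckets"]:
        hits = bucket["top_for_day"]["hits"]["hits"]
        if hits:  # a day bucket could in principle be empty
            out[bucket["key_as_string"]] = hits[0]["_source"]
    return out

print(first_per_day(response))
```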