Tags: java, spring, mongodb, spring-data-mongodb, mongotemplate

optimize mongo query to get max date in a very short time


I'm using the query below to get the max date (a field named extractionDate) in a collection called KPI, and I'm only interested in the field extractionDate:

@Override
public Mono<DBObject> getLastExtractionDate(MatchOperation matchOperation, ProjectionOperation projectionOperation) {
    return Mono.from(mongoTemplate.aggregate(
            newAggregation(
                    matchOperation,
                    projectionOperation,
                    group().max(EXTRACTION_DATE).as("result"),
                    project().andExclude("_id")
            ),
            "kpi",
            DBObject.class
    ));
}

As you can see above, I first filter the results with the match operation (matchOperation); after that, a projection operation keeps only the field "extractionDate", and the group stage takes its max and renames it "result".

But this query takes a lot of time (sometimes more than 20 seconds) because I have a huge amount of data. I already added an index on the field extractionDate, but it did not help much, so I'm looking for a way to make the query as fast as possible.

update:

Number of documents we have in the collection kpi: 42.8m documents

The query being executed:

Streaming aggregation: [{ "$match" : { "type" : { "$in" : ["INACTIVE_SITE", "DEVICE_NOT_BILLED", "NOT_REPLYING_POLLING", "MISSING_KEY_TECH_INFO", "MISSING_SITE", "ACTIVE_CIRCUITS_INACTIVE_RESOURCES", "INCONSISTENT_STATUS_VALUES"]}}}, { "$project" : { "extractionDate" : 1, "_id" : 0}}, { "$group" : { "_id" : null, "result" : { "$max" : "$extractionDate"}}}, { "$project" : { "_id" : 0}}] in collection kpi
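Mechanically, that pipeline reduces to a filter on type followed by a max over extractionDate. A minimal in-memory sketch of that computation (plain java.util maps standing in for BSON documents; the data and the shortened type list are illustrative only):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;

public class MaxExtractionDateSketch {
    // Mirrors the $match's $in list (subset shown for brevity)
    static final Set<String> TYPES = Set.of("INACTIVE_SITE", "DEVICE_NOT_BILLED");

    // $match on "type", then $group with { $max: "$extractionDate" }
    static Optional<String> maxExtractionDate(List<Map<String, String>> docs) {
        return docs.stream()
                .filter(d -> TYPES.contains(d.get("type")))
                .map(d -> d.get("extractionDate"))
                .max(Comparator.naturalOrder()); // ISO-8601 strings sort chronologically
    }

    public static void main(String[] args) {
        List<Map<String, String>> docs = List.of(
                Map.of("type", "INACTIVE_SITE", "extractionDate", "2022-01-10"),
                Map.of("type", "DEVICE_NOT_BILLED", "extractionDate", "2022-03-02"),
                Map.of("type", "OTHER", "extractionDate", "2022-12-31") // filtered out by $match
        );
        System.out.println(maxExtractionDate(docs).orElse("none")); // prints 2022-03-02
    }
}
```

Over 42.8m documents, the server has to do this work for every matching document unless an index lets it avoid touching the documents at all, which is what the answer below addresses.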

explain plan: (screenshot)

Example of a document in the collection KPI: (screenshot)

And finally, the indexes that already exist on this collection: (screenshot)


Solution

  • Index tuning will depend more on the properties in the $match expression. You should be able to run the query in mongosh and get an explain plan to determine whether your query is scanning the collection.

    Other things to consider are the size of the collection versus the working set of the server.

    Perhaps update your question with the $match expression, the explain plan, and the current set of index definitions, and we can refine the indexing strategy.

    Finally, "huge" is rather subjective. Are you querying millions or billions of documents, and what is the average document size?

    Update:

    Given that you're filtering on only one field and aggregating on one field, you'll find the best result will come from a compound index:

    { "type": 1, "extractionDate": 1 }
    

    That index should cover your query. Because of the $in, an index scan will be selected, but a scan over a small index is significantly better than a scan over the whole collection of documents.
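    To see why this index helps: each $in value identifies a contiguous range of index keys ordered by (type, extractionDate), and the maximum extractionDate for a given type sits at the end of that type's range. A toy sketch of that access pattern over a sorted set standing in for the index's key order (this illustrates the idea only, not the server's actual executor):

```java
import java.util.List;
import java.util.NavigableSet;
import java.util.TreeSet;

public class CompoundIndexSketch {
    // One bounded seek per $in value, the way an index scan can,
    // instead of reading every document in the collection
    static String maxDateAcrossTypes(NavigableSet<String> index, List<String> types) {
        String best = null;
        for (String type : types) {
            // '~' sorts after the date characters, so lower() lands on the
            // last key in this type's range, i.e. its max extractionDate
            String last = index.lower(type + "|~");
            if (last != null && last.startsWith(type + "|")) {
                String date = last.substring(type.length() + 1);
                if (best == null || date.compareTo(best) > 0) best = date;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Keys ordered by (type, extractionDate), like { type: 1, extractionDate: 1 }
        TreeSet<String> index = new TreeSet<>(List.of(
                "DEVICE_NOT_BILLED|2022-01-05",
                "DEVICE_NOT_BILLED|2022-04-18",
                "INACTIVE_SITE|2022-02-11",
                "INACTIVE_SITE|2022-03-30",
                "MISSING_SITE|2021-12-01"
        ));
        // prints 2022-04-18: found with two seeks, no document access
        System.out.println(maxDateAcrossTypes(index, List.of("INACTIVE_SITE", "DEVICE_NOT_BILLED")));
    }
}
```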

    NB. The existing index extractionDate_1_customer.irType_1 will not be any help for this query.
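    If you manage indexes from the application side, the suggested index can also be declared with Spring Data's @CompoundIndex annotation on the mapped class. This is a sketch under the assumption of a Kpi entity mapped to the kpi collection; note that Spring Boot only creates declared indexes when spring.data.mongodb.auto-index-creation=true is set:

```java
import java.util.Date;

import org.springframework.data.mongodb.core.index.CompoundIndex;
import org.springframework.data.mongodb.core.mapping.Document;

// Declares the suggested { type: 1, extractionDate: 1 } index on the kpi collection
@Document("kpi")
@CompoundIndex(name = "type_1_extractionDate_1", def = "{ 'type': 1, 'extractionDate': 1 }")
public class Kpi {
    private String type;
    private Date extractionDate;
    // other fields, getters and setters omitted
}
```

    Creating the index directly in mongosh with createIndex works just as well; the annotation is only a convenience for keeping the definition next to the entity.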