performance · go · memory · garbage-collection · heap-memory

large number of heap allocations in database query results


I am implementing the Rows interface in database/sql/driver. While implementing the Next(dest []Value) error method, I found that heap allocation becomes a performance bottleneck when the result set is very large. I wrote a small test to illustrate this.

    package main

    import (
        "database/sql/driver"
        "encoding/binary"
        "fmt"
        "time"
    )

    func main() {
        b := binary.LittleEndian.AppendUint32(nil, 0)
        b = binary.LittleEndian.AppendUint16(b, 5)
        b = append(b, "hello"...)
        t := time.Now()
        var v driver.Value
        for i := 0; i < 100000000; i++ {
            v = b[5:11]
        }
        _ = v
        fmt.Println(time.Since(t))
    }

The printed execution time is 4.154244695s. A CPU pprof profile shows that most of the time is spent in runtime.mallocgc, because the assignment v = b[5:11] has to allocate the slice header on the heap when storing it in the interface value.
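To confirm that the allocation comes from boxing the slice into the interface, and not from the slicing itself, the standard library's testing.AllocsPerRun can be used. This is a small sketch of that measurement (not part of the original test; the global sinks and the measure helper are my own names):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"testing"
)

var (
	sliceSink []byte // global sinks keep the compiler from
	ifaceSink any    // optimizing the assignments away
)

// measure returns the average number of heap allocations per
// plain slice assignment and per interface-boxing assignment.
func measure() (sliceAllocs, ifaceAllocs float64) {
	b := binary.LittleEndian.AppendUint32(nil, 0)
	b = binary.LittleEndian.AppendUint16(b, 5)
	b = append(b, "hello"...)

	// Re-slicing only copies the header into a slice variable: no allocation.
	sliceAllocs = testing.AllocsPerRun(1000, func() {
		sliceSink = b[5:11]
	})

	// Storing the slice in an interface boxes the header on the heap.
	ifaceAllocs = testing.AllocsPerRun(1000, func() {
		ifaceSink = b[5:11]
	})
	return
}

func main() {
	s, i := measure()
	fmt.Println("slice assignment allocs:", s) // 0
	fmt.Println("interface boxing allocs:", i) // 1
}
```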

I would like to know whether there is a way to reduce the performance cost of these heap allocations, or some other way to bypass them.


Solution

  • The value of b[5:11] is a slice, which is represented by a pointer and two integers (length and capacity). The slicing operation itself does not allocate any memory. However, assigning a value to an interface allocates a heap copy of the value whenever it does not fit into a single machine word, and a three-word slice header does not fit.

    So

        s := b[5:11]
    

    is cheap but

        var v any = s
    

    needs to allocate memory. I'm afraid there is no way around this short of relying on Go internals.
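One partial mitigation, where you control both the producer and the consumer, is to box a pointer instead of the slice header: storing a pointer in an interface does not allocate, so a single slice variable can be heap-allocated once and its header overwritten in place. Note this does not satisfy the standard driver.Value contract, which expects []byte rather than *[]byte, so it is a sketch of the general technique rather than a drop-in fix for the question's Next implementation:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"time"
)

func main() {
	b := binary.LittleEndian.AppendUint32(nil, 0)
	b = binary.LittleEndian.AppendUint16(b, 5)
	b = append(b, "hello"...)

	// Allocate a single slice header on the heap, once, up front.
	buf := new([]byte)

	t := time.Now()
	var v any
	for i := 0; i < 100000000; i++ {
		*buf = b[5:11] // overwrite the shared header in place: no allocation
		v = buf        // boxing a pointer needs no per-iteration allocation
	}
	_ = v
	fmt.Println(time.Since(t))
}
```

The consumer then unboxes with a *[]byte type assertion, at the cost of the value being mutated on the next iteration, so it must be consumed (or copied) before Next is called again.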