I've written a Cython function that takes a list or typed memoryview of numbers as its argument and returns a typed memoryview of the same length:
from libc.stdlib cimport malloc

def test(list_data):
    cdef unsigned int n = len(list_data)
    cdef unsigned int i = 0
    cdef double *results_arr = <double*>malloc(n * sizeof(double))
    cdef double[:] results = <double[:n]>results_arr
    for i in range(n):
        results[i] = 220 - list_data[i]
    return results
After running a few thousand tests on it, I started getting a Segmentation fault (core dumped) error. I realize that this is a memory management issue, but I cannot find an example of how to manage the memory of a typed memoryview returned by a function. The only helpful information I have found is the Cython documentation on memory allocation, which recommends tying the lifetime of results_arr to a Python object and using a __dealloc__ method to free the memory.
Is there a way to manage memoryview garbage collection that doesn't involve creating Python classes to deallocate the memory?
Edit: I tried this and it seems to be freeing up memory in the correct manner.
from libc.stdlib cimport malloc, free

def test(list_data):
    cdef unsigned int n = len(list_data)
    cdef unsigned int i = 0
    cdef double *arr = <double*>malloc(n * sizeof(double))
    if not arr:
        raise MemoryError()
    cdef double[:] results = <double[:n]>arr
    for i in range(n):
        results[i] = 220 - list_data[i]
    free(arr)
    return results
Why does this work and is there a better method for managing memory?
A typed memoryview is, as the name implies, a view into a memory buffer. It does not own that piece of memory; it only provides an efficient way of accessing it. The part of the Cython documentation you are referring to is a way of tying the underlying heap-allocated C array to the Python garbage collector. If you want to use C memory allocation, as you are doing here, you have to take responsibility for it as well, because you are now working at the C level and C does nothing for you for free. Your function makes a view of the allocated memory but discards the pointer that references it. Now this memory sits around with nothing taking responsibility for freeing it.
If you don't want to get into the world of C, I recommend that you read your data into a NumPy array in Python and pass this array to the Cython function instead, as sketched below. Python and NumPy are very well suited to that sort of thing.
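For example, a minimal sketch of that approach, reusing the 220 - x calculation from the question (the use of np.empty and the exact function body are my own illustration, not part of the original answer):

import numpy as np

def test(list_data):
    cdef unsigned int n = len(list_data)
    cdef unsigned int i
    # NumPy owns this buffer; the Python garbage collector frees it
    # once nothing references the returned array any more.
    result_np = np.empty(n, dtype=np.float64)
    cdef double[:] results = result_np   # typed view into the NumPy buffer
    for i in range(n):
        results[i] = 220 - list_data[i]
    return result_np

Because the returned array owns its data, there is no explicit free and no lifetime to manage by hand.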
But if you want to use malloc, an alternative could be to wrap it in an extension type.
from libc.stdlib cimport malloc, free

cdef class mymemory:
    cdef:
        double *arr
        double[::1] results

    def __cinit__(self, int n):
        self.arr = <double*>malloc(n * sizeof(double))
        if self.arr == NULL:
            raise MemoryError()

    def __init__(self, int n):
        self.results = <double[:n]> self.arr
        """
        Some code for filling in the results.
        """

    def __dealloc__(self):
        # Called when the instance is garbage collected.
        if self.arr != NULL:
            free(self.arr)
Now, when a mymemory instance is garbage collected, the underlying C array is freed with it. It is an alternative since you asked for one, but I still recommend NumPy over this.
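For illustration, a hypothetical way to drive it from the question's test function (the function body and the final copy are my own assumptions, not part of the answer):

import numpy as np

def test(list_data):
    cdef unsigned int n = len(list_data)
    cdef unsigned int i
    # The wrapper owns the malloc'd buffer; __dealloc__ frees it
    # when the wrapper object is garbage collected.
    cdef mymemory owner = mymemory(n)
    for i in range(n):
        owner.results[i] = 220 - list_data[i]
    # Copy into an array that owns its data before the wrapper can be
    # collected, so the caller never sees a view into freed memory.
    return np.asarray(owner.results).copy()

Alternatively, keep the mymemory object alive for as long as you need the data and read through its view; the key point is that the buffer's lifetime is tied to the object.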
In your second function you allocate memory, create a view into it, and then free it again. The memory that memoryview is viewing no longer exists. So you are correct that the memory is freed correctly, but the returned memoryview is no longer of any use to you: any access to it is undefined behaviour and can segfault just like before.
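If you do want to keep malloc in that second function, one possible sketch of a fix, in line with the NumPy suggestion above (the copy-before-free step is my own addition, not something from your code), is:

from libc.stdlib cimport malloc, free
import numpy as np

def test(list_data):
    cdef unsigned int n = len(list_data)
    cdef unsigned int i
    cdef double *arr = <double*>malloc(n * sizeof(double))
    if not arr:
        raise MemoryError()
    cdef double[:] results = <double[:n]>arr
    for i in range(n):
        results[i] = 220 - list_data[i]
    # Copy into a NumPy array that owns its data *before* freeing,
    # so the returned object never points at freed memory.
    out = np.asarray(results).copy()
    free(arr)
    return out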