I have a recursive function defined sort of like this:
void routine(int* old_arr, int k, int depth) {
    if (depth == 15) return;
    int* new_arr = new int[size];
    // some operation that builds new_arr values out of old_arr values
    delete[] old_arr; // if I don't need old_arr anymore, can I just do this?
    for (int i = 0; i < 3; i++) {
        routine(new_arr, k, depth + 1);
    }
}
As you'll notice, on the second iteration of the for loop there will be a double free: the first recursive call deletes new_arr (its old_arr), and the second call receives the already-freed pointer.
Think of this as a depth-first search along a ternary tree. The function argument old_arr is used to fill the values of a new array, new_arr, and old_arr is only needed to compute the values at the next depth. If I don't have any sort of free routine, I will have a lot of these pointers floating around, consuming memory and not doing anything. What is a good design around this?
I've considered using something like a shared_ptr<int>, and replacing the delete[] with:
shared_ptr<int> new_arr(new int[size]);
// operations to initialize new_arr with old_arr
if (old_arr.use_count() == 1) old_arr.reset();
Is this the right idea? Edit: actually, that won't work either.
In a nutshell, yes, it sounds like you need shared_ptr. But there is some strangeness in your recursion: the caller needs new_arr to stay alive because it calls routine() three times, so it should never be up to routine() to free it. Let the caller free it when it's done with it; it can't work any other way. With shared_ptr, that happens automatically when the last reference goes out of scope.
Use make_shared instead of new: it prevents the usual mistakes with new and performs one heap allocation instead of two (one for the array and one for the shared pointer's control block). Also, never use use_count() except for debugging; its value can be stale by the time you act on it. Here's some amended code:
void routine(shared_ptr<int[]> const& old_arr, int k, int depth)
{
    if (depth == 15) return;
    auto new_arr = make_shared<int[]>(size); // array form requires C++20
    // some operation that builds new_arr values out of old_arr values
    for (int i = 0; i < 3; i++) {
        routine(new_arr, k, depth + 1);
    }
} // last local reference to new_arr dies here; the array is freed automatically