I'm using the TensorFlow 2.1 git master branch (commit id: db8a74a737cc735bb2a4800731d21f2de6d04961), compiled locally. I'm playing around with the C API to call TF_LoadSessionFromSavedModel, but I keep getting a segmentation fault. I've managed to drill the error down to the sample code below: the TF_NewTensor call crashes with a segmentation fault.
#include <stdlib.h>
#include <stdio.h>
#include "tensorflow/c/c_api.h"

int main()
{
    TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*) * 1);

    int ndims = 1;
    int64_t* dims = malloc(sizeof(int64_t));
    int ndata = sizeof(int32_t);
    int32_t* data = malloc(sizeof(int32_t));
    dims[0] = 1;
    data[0] = 10;

    // Crash on the next line
    TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL);
    if (int_tensor == NULL)
    {
        printf("ERROR");
    }
    else
    {
        printf("OK");
    }
    return 0;
}
However, when I move the TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*) * 1); line after the TF_NewTensor call, it doesn't crash, like below:
#include <stdlib.h>
#include <stdio.h>
#include "tensorflow/c/c_api.h"

int main()
{
    int ndims = 1;
    int64_t* dims = malloc(sizeof(int64_t));
    int ndata = sizeof(int32_t);
    int32_t* data = malloc(sizeof(int32_t));
    dims[0] = 1;
    data[0] = 10;

    // No more crash
    TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL);
    if (int_tensor == NULL)
    {
        printf("ERROR");
    }
    else
    {
        printf("OK");
    }

    TF_Tensor** InputValues = (TF_Tensor**)malloc(sizeof(TF_Tensor*) * 1);
    return 0;
}
Is this a possible bug, or am I using it wrong? I don't understand how malloc-ing an independent variable could cause a segmentation fault. Can anybody reproduce this?
I'm using gcc (Ubuntu 9.2.1-9ubuntu2) 9.2.1 20191008 to compile.
UPDATE:
The error can be simplified further, as below. This crashes even without InputValues being allocated.
#include <stdlib.h>
#include <stdio.h>
#include "tensorflow/c/c_api.h"

int main()
{
    int ndims = 1;
    int ndata = 1;
    int64_t dims[] = { 1 };
    int32_t data[] = { 10 };

    TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, ndims, data, ndata, NULL, NULL);
    if (int_tensor == NULL)
    {
        printf("ERROR Tensor");
    }
    else
    {
        printf("OK");
    }
    return 0;
}
Compile with:
gcc -I<tensorflow_path>/include/ -L<tensorflow_path>/lib test.c -ltensorflow -o test2.out
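Note that the binary also needs to find libtensorflow at runtime. Assuming <tensorflow_path>/lib is not on your default loader path (an assumption about your setup), something like this should work before running:
export LD_LIBRARY_PATH=<tensorflow_path>/lib:$LD_LIBRARY_PATH
./test2.out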
Solution
As pointed out by Raz, pass an empty deallocator instead of NULL, and ndata should be the size in bytes.
#include "tensorflow/c/c_api.h"
void NoOpDeallocator(void* data, size_t a, void* b) {}
int main(){
int ndims = 2;
int64_t dims[] = {1,1};
int64_t data[] = {20};
int ndata = sizeof(int64_t); // This is tricky, it number of bytes not number of element
TF_Tensor* int_tensor = TF_NewTensor(TF_INT64, dims, ndims, data, ndata, &NoOpDeallocator, 0);
if (int_tensor != NULL)\
printf("TF_NewTensor is OK\n");
else
printf("ERROR: Failed TF_NewTensor\n");
}
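A no-op deallocator is safe here because data is a stack array owned by main and it outlives the tensor. The point is simply that TF_NewTensor may invoke the deallocator you pass (for example when it copies the buffer internally), so the function pointer must be valid rather than NULL.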
Check out my GitHub for the full code for running/compiling TensorFlow's C API here.
You set ndata to sizeof(int32_t), which is 4. Your ndata is passed as the len argument to TF_NewTensor(), which is the length of data in bytes (can be seen in GitHub). For your single int32_t element that is exactly sizeof(int32_t), so 4 is in fact the right value; in general it should be the number of elements times the element size.
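For example, here is a small sketch of my own showing the byte-length rule for a multi-element tensor (the 2x3 shape and values are illustrative, and it reuses the NoOpDeallocator idea from the solution above):
#include <stdio.h>
#include <stdint.h>
#include "tensorflow/c/c_api.h"

static void NoOpDeallocator(void* data, size_t len, void* arg) {}

int main()
{
    int64_t dims[] = { 2, 3 };              // 2x3 matrix
    int32_t data[] = { 1, 2, 3, 4, 5, 6 };  // 6 elements
    size_t len = sizeof(data);              // 24 bytes, i.e. 6 * sizeof(int32_t)

    TF_Tensor* t = TF_NewTensor(TF_INT32, dims, 2, data, len, NoOpDeallocator, NULL);
    printf("%s\n", t != NULL ? "OK" : "ERROR");
    if (t != NULL)
        TF_DeleteTensor(t);
    return 0;
}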
By the way, you can avoid using malloc() here (you don't check its return value, and it is error-prone and less elegant in general) and just use local variables instead.
UPDATE
In addition, you pass NULL for both deallocator and deallocator_arg. I'm pretty sure this is the issue, as the comment states "Clients must provide a custom deallocator function..." (can be seen here). The deallocator is called by TF_NewTensor() (can be seen here), and this may be the cause of the segmentation fault.
So, summing it all up, try the following code:
#include <stdio.h>
#include <stdint.h>
#include "tensorflow/c/c_api.h"

void my_deallocator(void* data, size_t len, void* arg)
{
    printf("Deallocator called with data %p\n", data);
}

int main()
{
    int64_t dims[] = { 1 };
    int32_t data[] = { 10 };
    TF_Tensor* int_tensor = TF_NewTensor(TF_INT32, dims, /*num_dims=*/ 1,
                                         data, /*len=*/ sizeof(data),
                                         my_deallocator, NULL);
    return 0;
}
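Whichever variant you use, remember to release the tensor with TF_DeleteTensor() when you are done with it; that (or an internal copy made inside TF_NewTensor()) is the point where TensorFlow invokes the deallocator on the buffer you passed in, which is exactly why the function pointer must be valid rather than NULL.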