I'm currently working through the TensorFlow XLA ahead-of-time (AOT) compilation workflow for the first time, and I've hit a problem while trying to build the final executable, which links in the AOT-compiled object.
I used the tutorial here to generate the test_graph_tfgather.pb and test_graph_tfgather.config.pbtxt files, then ran the tfcompile tool directly to produce MyClass.o and MyClass.h. So far so good.
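For reference, the direct tfcompile invocation looks roughly like the one below. I'm not certain the flag names are identical across TensorFlow versions (newer versions split the object output into separate function/metadata objects), so treat this as a sketch and check tfcompile --help:
tfcompile --graph=test_graph_tfgather.pb \
          --config=test_graph_tfgather.config.pbtxt \
          --cpp_class=MyClass \
          --out_header=MyClass.h \
          --out_object=MyClass.o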
I'm now building a simple Makefile project that links in this compiled model, but I'm getting some errors related to Eigen. Could this be caused by a different version of eigen3 installed on my machine? I also had to comment out the Eigen::ThreadPool lines because of Eigen errors, so a version mismatch may be the problem. Has anyone seen this before, or does anyone have any ideas how to get this working?
Thanks.
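(In case it helps diagnose the suspected mismatch, a minimal program like the one below prints the Eigen version that a given -I path actually resolves to; it only relies on Eigen's standard version macros:)
// eigen_version.cpp -- prints the Eigen version found on the include path
#include <iostream>
#include <Eigen/Core>  // defines EIGEN_WORLD/MAJOR/MINOR_VERSION
int main() {
  std::cout << EIGEN_WORLD_VERSION << "."
            << EIGEN_MAJOR_VERSION << "."
            << EIGEN_MINOR_VERSION << std::endl;
  return 0;
}
// e.g. g++ -std=c++11 -I /usr/include/eigen3 eigen_version.cpp -o eigen_version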
The build errors:
g++ -c -std=c++11 -I . -I /usr/include/eigen3 -I /home/user/tensorflow_xla/tensorflow -I /usr/include main.cpp
In file included from /home/user/tensorflow_xla/tensorflow/tensorflow/compiler/xla/types.h:22:0,
from /home/user/tensorflow_xla/tensorflow/tensorflow/compiler/xla/executable_run_options.h:20,
from /home/user/tensorflow_xla/tensorflow/tensorflow/compiler/tf2xla/xla_compiled_cpu_function.h:22,
from MyClass.h:14,
from main.cpp:6:
/home/user/tensorflow_xla/tensorflow/tensorflow/core/framework/numeric_types.h: In static member function ‘static tensorflow::bfloat16 Eigen::NumTraits<tensorflow::bfloat16>::infinity()’:
/home/user/tensorflow_xla/tensorflow/tensorflow/core/framework/numeric_types.h:79:28: error: ‘infinity’ is not a member of ‘Eigen::NumTraits<float>’
return FloatToBFloat16(NumTraits<float>::infinity());
^
/home/user/tensorflow_xla/tensorflow/tensorflow/core/framework/numeric_types.h: In static member function ‘static tensorflow::bfloat16 Eigen::NumTraits<tensorflow::bfloat16>::quiet_NaN()’:
/home/user/tensorflow_xla/tensorflow/tensorflow/core/framework/numeric_types.h:83:28: error: ‘quiet_NaN’ is not a member of ‘Eigen::NumTraits<float>’
return FloatToBFloat16(NumTraits<float>::quiet_NaN());
^
/home/user/tensorflow_xla/tensorflow/tensorflow/core/framework/numeric_types.h: At global scope:
/home/user/tensorflow_xla/tensorflow/tensorflow/core/framework/numeric_types.h:95:34: error: ‘log’ is not a template function
const tensorflow::bfloat16& x) {
^
/home/user/tensorflow_xla/tensorflow/tensorflow/core/framework/numeric_types.h:101:34: error: ‘exp’ is not a template function
const tensorflow::bfloat16& x) {
^
/home/user/tensorflow_xla/tensorflow/tensorflow/core/framework/numeric_types.h:107:34: error: ‘abs’ is not a template function
const tensorflow::bfloat16& x) {
^
Makefile:10: recipe for target 'main.o' failed
main.cpp source:
#define EIGEN_USE_THREADS
#define EIGEN_USE_CUSTOM_THREAD_POOL

#include <algorithm>  // std::copy
#include <iostream>

#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "MyClass.h"  // generated

int main(int argc, char** argv) {
  //Eigen::ThreadPool tp(2);  // Size the thread pool as appropriate.
  //Eigen::ThreadPoolDevice device(&tp, tp.NumThreads());

  MyClass matmul;
  //matmul.set_thread_pool(&device);

  // Set up args and run the computation.
  const float args[12] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
  std::copy(args + 0, args + 6, matmul.arg0_data());
  std::copy(args + 6, args + 12, matmul.arg1_data());
  matmul.Run();

  // Check result
  if (matmul.result0(0, 0) == 58) {
    std::cout << "Success" << std::endl;
  } else {
    std::cout << "Failed. Expected value 58 at 0,0. Got:"
              << matmul.result0(0, 0) << std::endl;
  }
  return 0;
}
Makefile
EIGEN_INC=-I /usr/include/eigen3
TF_INC=-I /home/user/tensorflow_xla/tensorflow
CPPFLAGS=-c -std=c++11

xla_hw: main.o MyClass.o
	g++ -o xla_hw main.o MyClass.o

main.o: main.cpp
	g++ $(CPPFLAGS) -I . $(TF_INC) $(EIGEN_INC) -I /usr/include main.cpp
I've solved this problem now. It turns out that a specific version of eigen3 is bundled with TensorFlow, and you need to build against that version for it to work. Once TensorFlow has been built, the correct eigen3 headers are located at <tensorflow path>/bazel-tensorflow/external/eigen_archive.
Below is the working Makefile, which uses the correct Eigen include path as well as the libraries needed to link the project.
TF_INC=-I /home/user/tensorflow_xla/tensorflow/bazel-tensorflow/external/eigen_archive -I /home/user/tensorflow_xla/tensorflow
TF_LIBS=-L/home/user/tensorflow_xla/tensorflow/bazel-bin/tensorflow/compiler/tf2xla/ -lxla_compiled_cpu_function -L/home/user/tensorflow_xla/tensorflow/bazel-bin/tensorflow/compiler/aot -lruntime
CPPFLAGS=-c -std=c++11

xla_hw: main.o MyClass.o
	g++ -o xla_hw main.o MyClass.o $(TF_LIBS)

main.o: main.cpp
	g++ $(CPPFLAGS) -I . $(TF_INC) -I /usr/include main.cpp
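With the bundled Eigen on the include path, the Eigen::ThreadPool lines I had commented out in main.cpp should compile again. I haven't exhaustively re-tested this part, so treat it as a sketch; it just re-enables the thread pool lines from the tutorial against the tfcompile-generated MyClass shown above:
#define EIGEN_USE_THREADS
#define EIGEN_USE_CUSTOM_THREAD_POOL

#include "third_party/eigen3/unsupported/Eigen/CXX11/Tensor"
#include "MyClass.h"  // generated

int main(int argc, char** argv) {
  Eigen::ThreadPool tp(2);  // Size the thread pool as appropriate.
  Eigen::ThreadPoolDevice device(&tp, tp.NumThreads());

  MyClass matmul;
  matmul.set_thread_pool(&device);  // run the XLA computation on the pool
  // ... set up args and call matmul.Run() exactly as in main.cpp above ...
  return 0;
}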