TODO:
- Fix bug in "python setup.py develop" (unable to copy module from build_ext)
- Experiment with group convolution implementations to improve performance
- Use utility macros in onnx_xla and python_onnxifi to perform asserts
- Clean up onnx_xla/backend.cc macros
- Maybe move to one client (benchmark this?)
- Add support for half in the python interface to ONNXIFI
- Add strided numpy array support to the python interface to ONNXIFI
- Add weight descriptor support to the python interface to ONNXIFI
- Benchmark two versions of LRN (materializing the square and not)
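Two of the items above concern strided numpy arrays and half precision in the python interface. A minimal sketch of what "strided" means and the usual remedy before handing a buffer to a C-level interface such as ONNXIFI (everything here is plain numpy; nothing is onnx-xla-specific):

```python
import numpy as np

# A strided view (e.g. a slice or transpose) is not laid out
# contiguously in memory, so it cannot be passed to a raw C buffer
# interface without first making a packed copy.
a = np.arange(12, dtype=np.float32).reshape(3, 4)
view = a[:, ::2]                      # stride over columns -> non-contiguous
assert not view.flags['C_CONTIGUOUS']

packed = np.ascontiguousarray(view)   # dense copy with default strides
assert packed.flags['C_CONTIGUOUS']
assert np.array_equal(packed, view)

# float16 ("half") arrays already expose a well-defined buffer;
# supporting them is a matter of mapping the dtype to the matching
# ONNXIFI tensor type on the backend side.
h = a.astype(np.float16)
assert h.dtype == np.float16 and h.nbytes == a.size * 2
```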
Steps to test:
- Run "python setup.py install" or "python setup.py develop"
- Start an XLA server with "./third_party/tensorflow/bazel-bin/tensorflow/compiler/xla/rpc/grpc_service_main_cpu --port=51000"
- To test the backend's ability to run a simple IR graph with a Relu operator, "cd build && ./tests"
- To test the backend's ability to run a simple ModelProto graph with a Relu operator, "cd build && ./relu_model"
- To execute a test using the python wrapper of onnxifi, "python test.py"
- To run unit tests of node translations, execute "python onnx_xla_test.py"
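The LRN benchmarking item in the TODO list above contrasts an implementation that materializes the full squared tensor with one that squares inside each channel window. A numpy sketch of the two variants (function names are hypothetical; parameter defaults and the channel-window bounds follow the ONNX LRN operator definition):

```python
import numpy as np

def lrn_materialized(x, size=5, alpha=1e-4, beta=0.75, bias=1.0):
    # Version 1: materialize the squared tensor once, then take
    # windowed sums over the channel axis.
    sq = x ** 2                       # full N x C x H x W temporary
    C = x.shape[1]
    ssum = np.zeros_like(x)
    lo_half, hi_half = (size - 1) // 2, size // 2
    for c in range(C):
        lo, hi = max(0, c - lo_half), min(C, c + hi_half + 1)
        ssum[:, c] = sq[:, lo:hi].sum(axis=1)
    return x / (bias + (alpha / size) * ssum) ** beta

def lrn_unmaterialized(x, size=5, alpha=1e-4, beta=0.75, bias=1.0):
    # Version 2: square inside each window, never holding the whole
    # squared tensor (trades recomputation for memory).
    C = x.shape[1]
    ssum = np.zeros_like(x)
    lo_half, hi_half = (size - 1) // 2, size // 2
    for c in range(C):
        lo, hi = max(0, c - lo_half), min(C, c + hi_half + 1)
        ssum[:, c] = (x[:, lo:hi] ** 2).sum(axis=1)
    return x / (bias + (alpha / size) * ssum) ** beta

x = np.random.randn(2, 8, 4, 4).astype(np.float32)
assert np.allclose(lrn_materialized(x), lrn_unmaterialized(x), atol=1e-6)
```

Both variants compute identical results; the benchmark question is purely about the memory/recomputation trade-off.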