Commit 46e3b5e (1 parent: 2ba625f). Showing 6 changed files with 137 additions and 3 deletions.

title: CMake Build Process

Installation of FTorch is handled by CMake.

This is controlled by the `CMakeLists.txt` file in `src/`.

## Basic instructions

To build the library, first clone it from GitHub to your local machine and then run:
```bash
cd FTorch/src/
mkdir build
cd build
```

Then invoke CMake with the Release build option, plus any other options as required
from the table below in [CMake build options](#cmake-build-options)
(note: you will likely _need_ to add some of these options):
```bash
cmake .. -DCMAKE_BUILD_TYPE=Release
```

Finally build and install the library:
```bash
cmake --build . --target install
```

## CMake build options

It is likely that you will need to provide at least the `CMAKE_PREFIX_PATH` flag.
The following CMake flags are available and can be passed as arguments through `-D<Option>=<Value>`:

| Option | Value | Description |
| ------ | ----- | ----------- |
| [`CMAKE_Fortran_COMPILER`](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html) | `ifort` / `gfortran` | Specify a Fortran compiler to build the library with. This should match the Fortran compiler you're using to build the code you are calling this library from. |
| [`CMAKE_C_COMPILER`](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html) | `icc` / `gcc` | Specify a C compiler to build the library with. |
| [`CMAKE_CXX_COMPILER`](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html) | `icpc` / `g++` | Specify a C++ compiler to build the library with. |
| [`CMAKE_PREFIX_PATH`](https://cmake.org/cmake/help/latest/variable/CMAKE_PREFIX_PATH.html) | `</path/to/libTorch/>` | Location of the Torch installation<sup>1</sup>. |
| [`CMAKE_INSTALL_PREFIX`](https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html) | `</path/to/install/lib/at/>` | Location at which the library files should be installed. By default this is `/usr/local`. |
| [`CMAKE_BUILD_TYPE`](https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html) | `Release` / `Debug` | Specifies the build type. The default is `Debug`; use `Release` for production code. |
| `ENABLE_CUDA` | `TRUE` / `FALSE` | Specifies whether to check for and enable CUDA<sup>2</sup>. |

<sup>1</sup> _The path to the Torch installation needs to allow CMake to locate the relevant Torch CMake files.
If Torch has been [installed as libtorch](https://pytorch.org/cppdocs/installing.html)
then this should be the absolute path to the unzipped libtorch distribution.
If Torch has been installed as PyTorch in a Python [venv (virtual environment)](https://docs.python.org/3/library/venv.html),
e.g. with `pip install torch`, then this should be `</path/to/venv/>lib/python<3.xx>/site-packages/torch/`._
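
If PyTorch has been installed in a Python environment, one convenient way to find a suitable path is to ask Torch itself. This is a sketch rather than part of the FTorch instructions; it assumes a recent PyTorch that provides `torch.utils.cmake_prefix_path`:

```python
import torch

# Prints the directory containing Torch's CMake configuration files,
# e.g. <venv>/lib/python3.xx/site-packages/torch/share/cmake.
# Pass this value (or the parent torch/ directory) via -DCMAKE_PREFIX_PATH.
print(torch.utils.cmake_prefix_path)
```

The printed `share/cmake` directory, like the `torch/` directory above, is accepted by CMake's `find_package` search when passed through `-DCMAKE_PREFIX_PATH`.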

<sup>2</sup> _This is often overridden by PyTorch. When installing with pip, the `index-url` flag can be used to ensure a CPU-only or GPU-enabled version is installed, e.g.
`pip install torch --index-url https://download.pytorch.org/whl/cpu`
or
`pip install torch --index-url https://download.pytorch.org/whl/cu118`
(for CUDA 11.8). URLs for alternative versions can be found [here](https://pytorch.org/get-started/locally/)._

For example, to build on a Unix system using the GNU compilers and install to `$HOME/FTorchbin/`
we would need to run something along the lines of:
```bash
cmake .. -DCMAKE_BUILD_TYPE=Release \
         -DCMAKE_Fortran_COMPILER=gfortran \
         -DCMAKE_C_COMPILER=gcc \
         -DCMAKE_CXX_COMPILER=g++ \
         -DCMAKE_PREFIX_PATH=</path/to/libTorch/> \
         -DCMAKE_INSTALL_PREFIX=$HOME/FTorchbin/
cmake --build . --target install
```

title: Other documentation

[TOC]

## GPU Support

In order to run a model on GPU, two main changes are required:

1) When saving your TorchScript model, ensure that it is on the GPU.
For example, when using [pt2ts.py](utils/pt2ts.py), this can be done by
uncommenting the following lines:

``` { .python}
device = torch.device('cuda')
trained_model = trained_model.to(device)
trained_model.eval()
trained_model_dummy_input_1 = trained_model_dummy_input_1.to(device)
trained_model_dummy_input_2 = trained_model_dummy_input_2.to(device)
```
Note: this also moves the dummy input tensors to the GPU. This is not necessary for
saving the model, but the tensors must also be on the GPU to test that the model runs
(a quick way to check the saved file from Python is sketched below, after this list).

2) When calling `torch_tensor_from_blob` in Fortran, the device for the input tensor(s),
but not the output tensor(s), should be set to `torch_kCUDA`, rather than
`torch_kCPU`. This ensures that the inputs are on the same device as the model.
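
As a quick sanity check for point 1 above, the saved TorchScript file can be loaded back in Python and its device inspected. This is only a sketch: the file name is a placeholder for whatever `pt2ts.py` produced, and it assumes a CUDA-capable machine is available.

```python
import torch

# Placeholder name: substitute the TorchScript file written by pt2ts.py.
model = torch.jit.load("saved_model_cuda.pt")

# Parameters of a model saved on the GPU report a CUDA device.
print(next(model.parameters()).device)  # expected: cuda:0
```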

## Useful resources

* [The libtorch API](https://pytorch.org/cppdocs/api/library_root.html)

title: When to transpose data

In the ResNet18 example, it was expected that the shape and indices of `in_data` in resnet_infer_fortran.f90 match those of `input_batch` in resnet18.py, i.e. `in_data(i, j, k, l) == input_batch[i, j, k, l]`.

C is row-major (rows are contiguous in memory), whereas Fortran is column-major (columns are contiguous), so it is necessary to perform a transpose when converting from the NumPy array to the Fortran array to ensure that their indices are consistent.
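
The difference in memory layout is easy to see directly in NumPy. The snippet below is purely illustrative and not part of the example code:

```python
import numpy as np

a = np.arange(6, dtype=np.float32).reshape(2, 3)  # [[0, 1, 2], [3, 4, 5]]

# C (row-major) order: rows are contiguous in memory.
print(a.flatten(order="C"))  # [0. 1. 2. 3. 4. 5.]

# Fortran (column-major) order: columns are contiguous in memory.
print(a.flatten(order="F"))  # [0. 3. 1. 4. 2. 5.]
```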

In this example code, the NumPy array is transposed before being flattened and saved to binary, allowing Fortran to `reshape` the flattened array into the correct order.
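
On the Python side this corresponds to something along the lines of the sketch below; the array contents and file name are placeholders rather than the exact code in resnet18.py:

```python
import numpy as np

# Placeholder data standing in for input_batch (shape N, C, H, W).
input_batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Reverse the axis order so that Fortran's column-major reshape recovers
# matching indices, then flatten and write the raw values to binary.
input_batch.transpose().flatten().tofile("input_batch.dat")
```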

An alternative would be to save the NumPy array with its original shape, but perform a transpose during or after reading the data into Fortran, e.g. using:

```
in_data = reshape(flat_data, shape(in_data), order=[4,3,2,1])
```

For more general use, it should be noted that the function used to create the input tensor from `input_batch`, `torch_tensor_from_blob`, performs a further transpose, which is required to allow the tensor to interact correctly with the model.

title: Troubleshooting

If you are experiencing problems building or using FTorch, please see below for guidance on common problems.

[TOC]

## Windows

If possible we recommend using the [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) (WSL) to build the library.
In this case the build process is the same as for a Linux environment.

If you need to build natively on Windows, the following approaches are available.

### Visual Studio

Use Visual Studio and the Intel Fortran compiler.
In this case you must install:

* [Visual Studio](https://visualstudio.microsoft.com/)
* [Intel OneAPI Base and HPC toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) (ensure that the Intel Fortran compiler and VS integration is selected).

You should then be able to build from CMD following the instructions in the project README.

### MinGW

It may be tempting to build on Windows using MinGW.
However, [libtorch does not currently support MinGW](https://github.com/pytorch/pytorch/issues/15099).
Instead, please build using Visual Studio and the Intel Fortran compiler (ifort) as
detailed in the project README.