Add additional page drafts to docs.
jatkinson1000 committed Oct 20, 2023
1 parent 2ba625f commit 46e3b5e
Showing 6 changed files with 137 additions and 3 deletions.
3 changes: 0 additions & 3 deletions .github/workflows/build_docs.yml
@@ -13,9 +13,6 @@ on:
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:

permissions:
contents: write

# Workflow run - one or more jobs that can run sequentially or in parallel
jobs:
# This workflow contains a single job called "build-docs"
1 change: 1 addition & 0 deletions FTorch.md
@@ -5,6 +5,7 @@ author: ICCS Cambridge
license: mit
github: https://github.com/Cambridge-ICCS
project_github: https://github.com/Cambridge-ICCS/FTorch
page_dir: pages
src_dir: ./src
./utils
output_dir: ./doc
61 changes: 61 additions & 0 deletions pages/cmake.md
@@ -0,0 +1,61 @@
title: CMake Build Process

FTorch is built and installed using CMake.

The build is controlled by the `CMakeLists.txt` file in `src/`.

## Basic instructions

To build the library, first clone the repository from GitHub to your local machine and then run:
```bash
cd FTorch/src/
mkdir build
cd build
```

Then invoke CMake with the Release build option, plus any other options as required
from the table below in [CMake build options](#cmake-build-options)
(note: you will likely _need_ to add some of these options for the build to succeed):
```bash
cmake .. -DCMAKE_BUILD_TYPE=Release
```

Finally build and install the library:
```bash
cmake --build . --target install
```

## CMake build options

It is likely that you will need to provide at least the `CMAKE_PREFIX_PATH` flag.
The following CMake flags are available and can be passed as arguments through `-D<Option>=<Value>`:

| Option | Value | Description |
| ------------------------------------------------------------------------------------------------- | ---------------------------- | --------------------------------------------------------------|
| [`CMAKE_Fortran_COMPILER`](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html) | `ifort` / `gfortran` | Specify a Fortran compiler to build the library with. This should match the Fortran compiler you're using to build the code you are calling this library from. |
| [`CMAKE_C_COMPILER`](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html) | `icc` / `gcc` | Specify a C compiler to build the library with |
| [`CMAKE_CXX_COMPILER`](https://cmake.org/cmake/help/latest/variable/CMAKE_LANG_COMPILER.html) | `icpc` / `g++` | Specify a C++ compiler to build the library with |
| [`CMAKE_PREFIX_PATH`](https://cmake.org/cmake/help/latest/variable/CMAKE_PREFIX_PATH.html) | `</path/to/libTorch/>` | Location of Torch installation<sup>1</sup> |
| [`CMAKE_INSTALL_PREFIX`](https://cmake.org/cmake/help/latest/variable/CMAKE_INSTALL_PREFIX.html) | `</path/to/install/lib/at/>` | Location at which the library files should be installed. By default this is `/usr/local` |
| [`CMAKE_BUILD_TYPE`](https://cmake.org/cmake/help/latest/variable/CMAKE_BUILD_TYPE.html) | `Release` / `Debug` | Specifies build type. The default is `Debug`, use `Release` for production code|
| `ENABLE_CUDA` | `TRUE` / `FALSE` | Specifies whether to check for and enable CUDA<sup>2</sup> |


<sup>1</sup> _The path to the Torch installation needs to allow CMake to locate the relevant Torch CMake files.
If Torch has been [installed as libtorch](https://pytorch.org/cppdocs/installing.html)
then this should be the absolute path to the unzipped libtorch distribution.
If Torch has been installed as PyTorch in a python [venv (virtual environment)](https://docs.python.org/3/library/venv.html),
e.g. with `pip install torch`, then this should be `</path/to/venv/>lib/python<3.xx>/site-packages/torch/`._

<sup>2</sup> _This is often overridden by PyTorch. When installing with pip, the `index-url` flag can be used to ensure a CPU or GPU only version is installed, e.g.
`pip install torch --index-url https://download.pytorch.org/whl/cpu`
or
`pip install torch --index-url https://download.pytorch.org/whl/cu118`
(for CUDA 11.8). URLs for alternative versions can be found [here](https://pytorch.org/get-started/locally/)._
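If Torch has been installed as PyTorch in a Python environment, a convenient way to find the value to pass to `CMAKE_PREFIX_PATH` is to ask PyTorch itself (a sketch, assuming the `torch` package is importable; it reports the location of the CMake configuration files bundled with the installation):

```python
import torch

# Print the CMake prefix path bundled with the installed PyTorch.
# Pass this value to -DCMAKE_PREFIX_PATH when configuring FTorch.
print(torch.utils.cmake_prefix_path)
```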

For example, to build on a Unix system using the GNU compilers and install to `$HOME/FTorchbin/`
we would need to run:
```bash
cmake .. -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_Fortran_COMPILER=gfortran \
    -DCMAKE_C_COMPILER=gcc \
    -DCMAKE_CXX_COMPILER=g++ \
    -DCMAKE_PREFIX_PATH=</path/to/libTorch/> \
    -DCMAKE_INSTALL_PREFIX=$HOME/FTorchbin
```

30 changes: 30 additions & 0 deletions pages/index.md
@@ -0,0 +1,30 @@
title: Other documentation

[TOC]

## GPU Support

In order to run a model on GPU, two main changes are required:

1) When saving your TorchScript model, ensure that it is on the GPU.
For example, when using [pt2ts.py](utils/pt2ts.py), this can be done by
uncommenting the following lines:

``` { .python}
device = torch.device('cuda')
trained_model = trained_model.to(device)
trained_model.eval()
trained_model_dummy_input_1 = trained_model_dummy_input_1.to(device)
trained_model_dummy_input_2 = trained_model_dummy_input_2.to(device)
```
Note: this also moves the dummy input tensors to the GPU. This is not necessary for
saving the model, but the tensors must also be on the GPU to test that the model runs.

2) When calling `torch_tensor_from_blob` in Fortran, the device for the input tensor(s),
but not the output tensor(s), should be set to `torch_kCUDA`, rather than
`torch_kCPU`. This ensures that the inputs are on the same device as the model.

## Useful resources

* [The libtorch API](https://pytorch.org/cppdocs/api/library_root.html)

15 changes: 15 additions & 0 deletions pages/transposing.md
@@ -0,0 +1,15 @@
title: When to transpose data

In the ResNet18 example, it was expected that the shape and indices of `in_data` in resnet_infer_fortran.f90 match those of `input_batch` in resnet18.py, i.e. `in_data(i, j, k, l) == input_batch[i, j, k, l]`.

Since C is row-major (rows are contiguous in memory), whereas Fortran is column-major (columns are contiguous), it is necessary to perform a transpose when converting from the NumPy array to the Fortran array to ensure that their indices are consistent.

In this example code, the NumPy array is transposed before being flattened and saved to binary, allowing Fortran to `reshape` the flattened array into the correct order.

An alternative would be to save the NumPy array with its original shape, but perform a transpose during or after reading the data into Fortran, e.g. using:

```fortran
in_data = reshape(flat_data, shape(in_data), order=[4, 3, 2, 1])
```

For more general use, it should be noted that the function used to create the input tensor from `input_batch`, `torch_tensor_from_blob`, performs a further transpose, which is required to allow the tensor to interact correctly with the model.
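The equivalence between "transpose then flatten" on the Python side and a column-major read on the Fortran side can be sketched in NumPy (a standalone illustration, not code from the FTorch examples; the filename is hypothetical):

```python
import numpy as np

# A small 4D array standing in for input_batch.
arr = np.arange(2 * 3 * 4 * 5, dtype=np.float32).reshape(2, 3, 4, 5)

# Transposing (reversing the axes) and then flattening in C (row-major)
# order is equivalent to flattening the original array in Fortran
# (column-major) order -- which is exactly the layout Fortran's reshape
# expects when reading the raw binary back in.
flat = arr.T.flatten()
assert np.array_equal(flat, arr.flatten(order="F"))

# The flat data can then be written as raw binary for Fortran to read.
flat.tofile("input.bin")  # hypothetical filename
```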
30 changes: 30 additions & 0 deletions pages/troubleshooting.md
@@ -0,0 +1,30 @@
title: Troubleshooting

If you are experiencing problems building or using FTorch, please see below for guidance on common problems.

[TOC]

## Windows

If possible we recommend using the [Windows Subsystem for Linux](https://learn.microsoft.com/en-us/windows/wsl/) (WSL) to build the library.
In this case the build process is the same as for a Linux environment.

If you need to build natively on Windows, the following approaches may be used.

### Visual Studio

To build using Visual Studio and the Intel Fortran compiler you must install:
* [Visual Studio](https://visualstudio.microsoft.com/)
* [Intel OneAPI Base and HPC toolkit](https://www.intel.com/content/www/us/en/developer/tools/oneapi/base-toolkit-download.html) (ensure that the Intel Fortran compiler and VS integration is selected).

You should then be able to build from CMD following the CMake instructions above.

### MinGW

It may be tempting to build on Windows using MinGW.
However, [libtorch does not currently support MinGW](https://github.com/pytorch/pytorch/issues/15099).
Instead please build using Visual Studio and the Intel Fortran compiler (`ifort`) as
detailed in the project README.

