This repository provides scripts to build a distributable binary release of the BSP for the HPSC Chiplet, shipped alongside the source release tarball. The binary release supports one configuration of the HW emulator and the SSW software stack out of the many configurations supported by the source release. For example, the source release (but not the binary release) supports building images that boot from non-volatile memory, boot from preloaded binaries, boot with or without the BL0 bootloader, boot with or without the power-on test suite, boot into Yocto Linux or into bare Busybox, boot only a subset of subsystems, boot on the ZeBu and HAPS emulators, and so on.
To develop, build, run, and test BSP components in place, in all supported configurations, with dependency tracking that rebuilds only the components a given configuration needs (and to build on other distributions, or to build without root), build from source in the BUILD/src directory according to the instructions in BUILD/src/README.md.
This repository includes:

- build-hpsc-bsp.sh - top-level build script.
- build-hpsc-bare.sh - builds artifacts with the bare metal compiler, including the TRCH firmware for the Cortex-M4 and u-boot for the Cortex-R52s.
- build-hpsc-host.sh - builds artifacts for the host development system, including QEMU and associated utilities, and developer tools like the HPSC Eclipse distribution.
- build-hpsc-rtems.sh - builds the RTEMS BSP and reference software for the RTPS Cortex-R52s, and an unused BSP for the TRCH.
- build-hpsc-yocto.sh - builds the Yocto Linux SDK for the HPPS Cortex-A53 clusters, including the reference root filesystem and Linux test utilities.
- build-hpsc-private.sh - builds private sources.
- build-recipe.sh - builds individual component recipes; wrapped by the other build scripts.
- run-qemu.sh - runs QEMU using artifacts deployed by the build scripts (prior to packaging).
Scripts must all be run from the directory in which they reside. Use the -h flag to print a script's usage help.
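For example, to print usage for the top-level script:
./build-hpsc-bsp.sh -h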
Build scripts download from the git repositories located at: https://github.com/orgs/ISI-apex/teams/hpsc/repositories
To update the sources that the BSP build scripts use, you must modify the build recipes, located in build-recipes/. There are helper scripts in utils/ to automate upgrading dependencies.
Some sources (ATF, linux, and u-boot for the HPPS) are managed by Yocto recipes, and are thus transitive dependencies.
These recipes are found in the meta-hpsc project and must be configured there.
The meta-hpsc project revision is then configured with a local recipe in build-recipes/, like other BSP dependencies.
If you need to add new build recipes, read build-recipes/README.md and walk through build-recipe.sh and build-recipes/ENV.sh. See existing recipes for examples.
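As a rough sketch only, a recipe is a shell fragment that pins a source revision and defines build hooks. The variable and hook names below are hypothetical; the real ones are defined in build-recipes/ENV.sh and documented in build-recipes/README.md:

# build-recipes/my-component.sh -- hypothetical skeleton, for illustration only
export GIT_REPO="https://github.com/ISI-apex/my-component.git"  # source repository (assumed name)
export GIT_REV=0123abcd                                         # pinned revision (assumed name)

do_build() {
    # hypothetical hook invoked by build-recipe.sh during the build step
    make
}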
BSP QEMU configuration files and scripts are managed by the bsp build recipe and are source-controlled in build-recipes/bsp/utils/.
The top-level build-hpsc-bsp.sh script wraps the other build scripts documented below and packages the BSP's binary and source tarballs. By default, it will run through the following steps, which may be run independently using the -a flag so long as the previous steps are complete (see the example after this list):

- fetch - download toolchains and fetch all sources required to build offline
- build - build sources for the binary release
- stage - stage artifacts into the directory structure to be packaged
- package - package the staged directory structure into the BSP binary release archive
- package-sources - package sources into the BSP source release archive
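For example, assuming -a takes one of the step names above, sources can be fetched now (e.g., to prepare an offline build) and built later:
./build-hpsc-bsp.sh -a fetch
./build-hpsc-bsp.sh -a build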
To perform a build:
./build-hpsc-bsp.sh
All files are downloaded and built in a working directory, which defaults to BUILD. You may optionally specify a different working directory using -w.
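For example:
./build-hpsc-bsp.sh -w /path/to/workdir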
To independently fetch and build host development system artifacts:
./build-hpsc-host.sh
The build generates in ${WORKING_DIR}/deploy:

- conf - QEMU configuration directory
- qemu-env.sh - QEMU configuration parameters
- run-qemu.sh - QEMU launch script
- test.sh - Pytest launch script

and in ${WORKING_DIR}/deploy/sdk:

- qemu-devicetrees - QEMU device tree directory
- qemu-bridge-helper - QEMU utility for creating TAP devices
- qemu-system-aarch64 - QEMU binary
- hpsc-eclipse-cpp-2018-09-linux-gtk-x86_64.tar.gz - HPSC Eclipse installer
- tools - a directory with host utility scripts and binaries

and non-relocatable SDKs in ${WORKING_DIR}/env:

- RSB-5 - RTEMS Source Builder SDK
- RT-5 - RTEMS Tools
To independently fetch and build the bare metal toolchain and dependent artifacts:
./build-hpsc-bare.sh
The build generates in ${WORKING_DIR}/sdk/toolchains:

- gcc-arm-none-eabi-7-2018-q2-update-linux.tar.bz2 - ARM bare metal toolchain

and in ${WORKING_DIR}/deploy/ssw/trch:

- trch.bin - TRCH firmware
- syscfg-schema.json - schema for system configuration parsed by the TRCH

and in ${WORKING_DIR}/deploy/ssw/rtps/r52:

- u-boot.bin - u-boot for the RTPS R52s

The bare metal toolchain is also installed locally at ${WORKING_DIR}/env/gcc-arm-none-eabi-7-2018-q2-update.
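To invoke this local toolchain installation directly (a minimal sketch, assuming the default BUILD working directory and the standard bin/ layout of the ARM toolchain tarball):
export PATH="$PWD/BUILD/env/gcc-arm-none-eabi-7-2018-q2-update/bin:$PATH"
arm-none-eabi-gcc --version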
To independently fetch and build the RTEMS TRCH and R52 BSPs, and dependent artifacts:
./build-hpsc-rtems.sh
The build generates non-relocatable BSPs in ${WORKING_DIR}/env:

- RTEMS-5-RTPS-R52 - RTPS R52 RTEMS BSP for building RTEMS applications
- RTEMS-5-TRCH - TRCH RTEMS BSP for building RTEMS applications

and in ${WORKING_DIR}/deploy/ssw/rtps/r52:

- rtps-r52.img - RTPS R52 firmware
To independently fetch and build the Yocto SDK and dependent artifacts:
./build-hpsc-yocto.sh
The build generates in ${WORKING_DIR}/deploy/sdk/toolchains:

- poky-glibc-x86_64-core-image-hpsc-aarch64-hpsc-chiplet-toolchain-2.7.1.sh - the Yocto SDK installer

and in ${WORKING_DIR}/deploy/ssw/hpps/:

- arm-trusted-firmware.bin - the Arm Trusted Firmware binary
- core-image-hpsc-hpsc-chiplet.cpio.gz.u-boot - the Linux root file system for booting the dual A53 cluster
- hpsc.dtb - the Chiplet device tree for SMP Linux
- Image.gz - the Linux kernel binary image
- u-boot.dtb - the U-boot device tree
- u-boot-nodtb.bin - the U-boot bootloader for the dual A53 cluster

and in ${WORKING_DIR}/deploy/ssw/:

- tests - a directory with the pytest test infrastructure

The Yocto SDK is also installed locally at ${WORKING_DIR}/env/yocto-hpps-sdk.
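To cross-compile against this local SDK installation, source its environment setup script (a minimal sketch; Yocto SDKs ship an environment-setup-* script, and the exact file name below is assumed from the standard poky aarch64 target):
. BUILD/env/yocto-hpps-sdk/environment-setup-aarch64-poky-linux
$CC --version   # CC is exported by the setup script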
After the builds complete, developers can execute the top-level run-qemu.sh script to launch QEMU. This approach uses the artifacts in the working deploy directory, so the BSP release tarball need not be created and extracted on every development iteration.
WARNING: Do not edit files in the working directory ${WORKING_DIR} (such as deploy/qemu-env.sh)! ${WORKING_DIR} is managed exclusively by the build scripts. To override the default QEMU configuration, create and use a custom environment file outside the working directory.
For example:
./run-qemu.sh -- -e /path/to/qemu-env-override.sh
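The override file is a shell fragment that reassigns selected variables from the generated deploy/qemu-env.sh. The variable below is hypothetical and only illustrates the shape of such a file; copy the real names from deploy/qemu-env.sh:
# qemu-env-override.sh -- hypothetical override, for illustration only
SOME_QEMU_PARAM=overridden-value  # use actual variable names from deploy/qemu-env.sh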