A lightweight, robust, flexible, and containerized NFS server.
Nightly-built containers are available for multiple architectures, and manifests are now signed with cosign!
A Helm chart is available to ease deployment on Kubernetes.
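A minimal sketch of installing the chart is shown below; the repository URL and chart name are placeholders, so check the chart's own documentation for the real values:

```sh
# Placeholder repository and chart name -- replace with the values documented
# for this project's Helm chart.
helm repo add <repo-name> <repo-url>
helm repo update
helm install my-nfs-server <repo-name>/nfs-server
```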
This is the only containerized NFS server that offers all of the following features:

- small (~15MB) Alpine Linux image
- NFS versions 3, 4, or both simultaneously
- clean teardown of services upon termination (no lingering `nfsd` processes on the Docker host)
- flexible construction of `/etc/exports`
- extensive server configuration via environment variables
- human-readable logging (with a helpful debug mode)
- optional bonus features
  - Kerberos security
  - NFSv4 user ID mapping via `idmapd`
  - AppArmor compatibility
This forked version adds the following features:

- Support for multiple architectures: linux/amd64, linux/arm64, linux/i386, linux/armhf, linux/armel
- Daily builds to stay up to date with the upstream distribution (Alpine)
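Because the published manifests are signed with cosign, you can verify an image before running it. The sketch below uses cosign v2 keyless verification; whether the signatures are keyless and which certificate identity they carry are assumptions here, so consult the repository for the exact values:

```sh
# Keyless verification (assumes GitHub Actions OIDC signing; adjust the
# identity regexp to this repository's actual signing workflow).
cosign verify \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  --certificate-identity-regexp 'https://github.com/obeone/.*' \
  ghcr.io/obeone/nfs-server:latest
```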
Table of contents:

- Requirements
- Usage
- Optional features
- Advanced
- Help!
- Remaining tasks
- Acknowledgements
## Requirements

- The Docker host kernel will need the following kernel modules:

  - `nfs`
  - `nfsd`
  - `rpcsec_gss_krb5` (only if Kerberos is used)

  You can manually enable these modules on the Docker host with:

  ```sh
  modprobe {nfs,nfsd,rpcsec_gss_krb5}
  ```

  or you can just allow the container to load them automatically (a quick host-side check is sketched after this list).

- The container will need to run with `CAP_SYS_ADMIN` (or `--privileged`). This is necessary because the server needs to mount several filesystems inside the container to support its operation, and performing mounts from inside a container is impossible without these capabilities.

- The container will need local access to the files you'd like to serve via NFS. You can use Docker volumes, bind mounts, files baked into a custom image, or virtually any other means of supplying files to a Docker container.
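If you prefer to manage the modules yourself, the following is a minimal sketch using standard Linux tooling; nothing here is specific to this image, and the `modules-load.d` file name is just an example:

```sh
# Confirm the NFS modules are loaded on the Docker host.
lsmod | grep -E '^(nfs|nfsd) '

# Optionally make them load at boot on systemd-based hosts
# (the file name is arbitrary).
printf 'nfs\nnfsd\n' | sudo tee /etc/modules-load.d/nfs.conf
```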
## Usage

Starting the `ghcr.io/obeone/nfs-server` image will launch an NFS server. You'll need to supply some information upon container startup, which we'll cover below, but briefly speaking, your `docker run` command might look something like this:

```sh
docker run \
  -v /host/path/to/shared/files:/some/container/path \
  -v /host/path/to/exports.txt:/etc/exports:ro \
  --cap-add SYS_ADMIN \
  -p 2049:2049 \
  ghcr.io/obeone/nfs-server
```

Let's break that command down into its individual pieces to see what's required for a successful server startup.
- Provide the files to be shared over NFS

  As noted in the requirements, the container will need local access to the files you'd like to share over NFS. Some ideas for supplying these files:

  - bind mounts (`-v /host/path/to/shared/files:/some/container/path`)
  - volumes (`-v some_volume:/some/container/path`)
  - files baked into a custom image (e.g. in a `Dockerfile`: `COPY /host/files /some/container/path`)

  You may use any combination of the above, or any other means to supply files to the container.
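  For example, a named volume can be created and pre-populated with a throwaway container before being handed to the server. This is only a sketch; the volume name `nfs_data` and the paths are placeholders:

  ```sh
  # Create a named volume and copy files into it from the host.
  docker volume create nfs_data
  docker run --rm \
    -v /host/path/to/shared/files:/src:ro \
    -v nfs_data:/data \
    alpine sh -c 'cp -a /src/. /data/'

  # Later, attach that volume to the NFS server container.
  docker run -v nfs_data:/some/container/path ... ghcr.io/obeone/nfs-server
  ```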
- Provide your desired NFS exports (`/etc/exports`)

  You'll need to tell the server which container directories to share. You have three options for this; choose whichever one you prefer:

  - bind mount `/etc/exports` into the container

    ```sh
    docker run \
      -v /host/path/to/exports.txt:/etc/exports:ro \
      ... \
      ghcr.io/obeone/nfs-server
    ```

  - provide each line of `/etc/exports` as an environment variable

    The container will look for environment variables that start with `NFS_EXPORT_` and end with an integer, e.g. `NFS_EXPORT_0`, `NFS_EXPORT_1`, etc.

    ```sh
    docker run \
      -e NFS_EXPORT_0='/container/path/foo *(ro,no_subtree_check)' \
      -e NFS_EXPORT_1='/container/path/bar 123.123.123.123/32(rw,no_subtree_check)' \
      ... \
      ghcr.io/obeone/nfs-server
    ```

  - bake `/etc/exports` into a custom image

    e.g. in a `Dockerfile`:

    ```dockerfile
    FROM ghcr.io/obeone/nfs-server
    ADD /host/path/to/exports.txt /etc/exports
    ```
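  Whichever option you pick, each export line uses the standard `exports(5)` format: a container path followed by one or more client specifications with options. A small illustrative `exports.txt` (paths and client ranges are placeholders):

  ```
  # <container path>        <client>(<options>)
  /some/container/path      *(ro,no_subtree_check)
  /another/container/path   10.0.0.0/24(rw,sync,no_subtree_check)
  ```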
- Use `--cap-add SYS_ADMIN` or `--privileged`

  As noted in the requirements, the container will need additional privileges, so your `run` command will need either:

  ```sh
  docker run --cap-add SYS_ADMIN ... ghcr.io/obeone/nfs-server
  ```

  or

  ```sh
  docker run --privileged ... ghcr.io/obeone/nfs-server
  ```

  Not sure which to use? Go for `--cap-add SYS_ADMIN`, as it's the lesser of two evils.
- Expose the server ports

  You'll need to open up at least one server port for your client connections. The ports listed in the examples below are the defaults used by this image, and most can be customized.

  - If your clients connect via NFSv4 only, you can get by with just TCP port `2049`:

    ```sh
    docker run -p 2049:2049 ... ghcr.io/obeone/nfs-server
    ```

  - If you'd like to support NFSv3, you'll need to expose a lot more ports:

    ```sh
    docker run \
      -p 2049:2049   -p 2049:2049/udp   \
      -p 111:111     -p 111:111/udp     \
      -p 32765:32765 -p 32765:32765/udp \
      -p 32767:32767 -p 32767:32767/udp \
      ... \
      ghcr.io/obeone/nfs-server
    ```
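  Once the container is up, an NFSv3 setup can be sanity-checked from another host with standard RPC tools. This is a generic sketch, not specific to this image; for NFSv4-only setups these calls will not work, since they rely on the portmapper and mountd ports:

  ```sh
  # List the RPC services registered on the server (requires port 111).
  rpcinfo -p <container-IP>

  # Show which directories the server exports (requires the mountd port).
  showmount -e <container-IP>
  ```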
If you pay close attention to each of the items in this section, the server should start quickly and be ready to accept your NFS clients:

```sh
# mount <container-IP>:/some/export /some/local/path
```
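On the client side you may want to pin the NFS version explicitly. A minimal sketch using standard `mount.nfs` options (paths are placeholders; use whichever version the server is configured to offer):

```sh
# Mount over NFSv4 (only TCP port 2049 needs to be reachable).
mount -t nfs -o vers=4 <container-IP>:/some/export /some/local/path

# Mount over NFSv3 (requires the additional ports shown above).
mount -t nfs -o vers=3 <container-IP>:/some/export /some/local/path
```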
## Optional features

- automatically load required kernel modules

## Advanced

- customizing which ports are used
- customizing NFS versions offered
- performance tuning
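Most of this tuning is exposed through environment variables in the upstream image. The sketch below assumes the upstream variable names (`NFS_VERSION`, `NFS_SERVER_THREAD_COUNT`, `NFS_LOG_LEVEL`) carry over to this fork unchanged; verify them against this image's documentation before relying on them:

```sh
# Assumed upstream environment variables -- check this image's documentation.
docker run \
  -e NFS_VERSION=4.2 \
  -e NFS_SERVER_THREAD_COUNT=8 \
  -e NFS_LOG_LEVEL=DEBUG \
  -e NFS_EXPORT_0='/container/path/foo *(ro,no_subtree_check)' \
  --cap-add SYS_ADMIN \
  -p 2049:2049 \
  ghcr.io/obeone/nfs-server
```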
## Help!

Please open an issue if you have any questions, constructive criticism, or can't get something to work.
## Remaining tasks

- figure out why `rpc.nfsd` takes 5 minutes to start up/time out unless `rpcbind` is running
- add more examples
## Acknowledgements

This work was based on prior projects: