A helm chart for installing a tftp server for PXE booting

pxeboot

This implements a TFTP server pre-configured for installing various Debian versions directly over the network to clients that support the PXE boot protocol.

For this to work, you must configure your DHCP server to tell the clients where to boot from:

next-server <<pxeboot server>>;
filename "lpxelinux.0";
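With ISC dhcpd, for example, those options would typically sit in a subnet block like this (all addresses below are placeholders, not values from this chart):

```
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  next-server 192.168.1.10;     # IP (or DNS name) of the pxeboot TFTP server
  filename "lpxelinux.0";
}
```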

When booting, clients should be presented with a menu similar to this:

Screenshot of the boot menu

About TFTP

TFTP is slow. It is named the "Trivial File Transfer Protocol" for a reason: its sole redeeming feature is that it is simple to implement in the ROM of a network card. It requires little code and has a small memory footprint - which was important in 1981.
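That simplicity shows in the wire format: a read request (RRQ) is just a 2-byte opcode followed by a NUL-terminated filename and transfer mode (RFC 1350). A minimal sketch in Python, purely for illustration (not part of this chart):

```python
import struct

def rrq(filename: str, mode: str = "octet") -> bytes:
    """Build a TFTP read-request (RRQ) packet as defined by RFC 1350."""
    # opcode 1 = RRQ, then NUL-terminated filename and mode strings
    return (struct.pack("!H", 1)
            + filename.encode("ascii") + b"\x00"
            + mode.encode("ascii") + b"\x00")

pkt = rrq("lpxelinux.0")
# the entire request fits in 20 bytes
```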

Nowadays it seems not only quaint but downright archaic. But it still has wide support across many devices and is frequently used for e.g. PXE booting of servers and other devices.

The TFTP protocol is not easy to NAT - and Kubernetes generally makes heavy use of NAT to route traffic between pods and services.
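The NAT trouble comes from TFTP's transfer IDs: the client sends its request to port 69, but the server replies from a fresh ephemeral port, so a naive NAT device has no mapping for the reply. A self-contained sketch of that port switch, using plain UDP sockets on localhost (no real TFTP traffic involved):

```python
import socket

# "server" listening socket, standing in for TFTP's well-known port 69
listen = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listen.bind(("127.0.0.1", 0))
listen_port = listen.getsockname()[1]

# the client sends its request to the well-known port...
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(b"RRQ lpxelinux.0", ("127.0.0.1", listen_port))

# ...but the server answers from a brand-new socket (a new "transfer ID")
request, client_addr = listen.recvfrom(512)
transfer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
transfer.sendto(b"DATA block 1", client_addr)

reply, server_addr = client.recvfrom(512)
# the reply arrives from a port the client never contacted -
# exactly the mapping a NAT device must track specially
assert server_addr[1] != listen_port
```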

To avoid this trouble, the pods will by default use the host network. This means you cannot run more than one such pod per Kubernetes node - the kube scheduler will prevent it, as the pods would otherwise clash over the host ports they occupy.
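In pod-spec terms, host networking boils down to a single field. The fragment below is a generic illustration of the idea, not a copy of this chart's template:

```yaml
spec:
  hostNetwork: true          # the pod binds UDP/69 directly on the node
  containers:
    - name: tftp
      ports:
        - containerPort: 69
          protocol: UDP
```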

It is probably possible to run the tftp service using a load balancer (i.e. type: LoadBalancer), assuming the load balancer supports proxy/NAT of the TFTP protocol. On Linux, this would probably make heavy use of the netfilter kernel modules for TFTP: nf_nat_tftp and nf_conntrack_tftp. A later version of this helm chart/container may do so.
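On a Linux load-balancer node, those connection-tracking helpers would typically need to be loaded first - a sketch, assuming the modules are available on the host kernel:

```shell
# load the TFTP conntrack/NAT helper modules
modprobe nf_conntrack_tftp
modprobe nf_nat_tftp
# verify they are present
lsmod | grep tftp
```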

Getting The IP Address

The TFTP server will be available on the IP address of the node running the pod. Unfortunately, this means the address is essentially unpredictable (unless you have only one node in your Kubernetes cluster, which is unlikely).
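To see which node (and therefore which IP) the pod landed on, something like the following works - note the label selector is an assumption about the chart's labels, so adjust it to match your release:

```shell
kubectl get pods -l app.kubernetes.io/name=pxeboot -o wide   # shows the node
kubectl get nodes -o wide                                    # shows node IPs
```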

There are two approaches to solving this problem: external-dns & node selectors.

Use External-DNS

If we are happy with a predictable DNS name, then the actual IP address does not matter. We can get a predictable DNS name by using External-DNS (https://github.com/kubernetes-sigs/external-dns - the Bitnami helm chart for external-dns is convenient: https://github.com/bitnami/containers/tree/main/bitnami/external-dns).

Simply add an annotation to the service in the values file:

service:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: tftp.example.com

and external-dns will take care of pointing the DNS entry at whichever node(s) run the pod. If there are multiple instances of the pod (i.e. you set replicaCount > 1), the DNS entry will point to all the involved nodes.
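Once external-dns has synced the record, it can be checked from any machine with, e.g.:

```shell
dig +short tftp.example.com   # should print the node IP(s) running the pod
```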

Use Node Selectors

As a less attractive option, you can force the pod to run on a specific node by assigning a label to the node - e.g.:

kubectl label node somenode.example.com jorgensen.org.uk/pxeboot=allowed

and specifying a nodeSelector in the values file to select this node:

nodeSelector:
  "jorgensen.org.uk/pxeboot": "allowed"
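Putting it together, the chart can then be installed with the overrides above in a values file - the release name and chart path here are assumptions, not fixed by this README:

```shell
# my-values.yaml contains the nodeSelector (and/or service annotations) shown above
helm install pxeboot ./pxeboot -f my-values.yaml
```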