Building for the Xilinx KR260 has always been powerful, but not always fast. Every small change—kernel tweaks, device-tree edits, PL bitstream updates—usually meant flashing SD cards, rebooting boards, and waiting. I wanted a development environment that felt as fast and flexible as software engineering, not a traditional embedded workflow.
That’s where this project began.
I set out to build a complete, production-ready custom Yocto Linux distribution for the KR260 that could network-boot instantly, load new kernels and root filesystems without SD cards, and support rapid FPGA and RPU development. The goal was simple: create a foundation that accelerates every future project on the KR260.
By combining Yocto Scarthgap (5.1) with Xilinx Tools 2025.2, I created a custom Linux image that integrates:
- PYNQ 3.1.2 for interactive FPGA development
- XRT & ZOCL for bitstream loading and acceleration
- Jupyter Notebook server for live experimentation on port 9090
- Remoteproc for loading and controlling RPU firmware
- UIO drivers enabling userspace access to PL hardware
- Network booting via TFTP + NFS for instant iteration
With this setup, modifying a kernel parameter or deploying a new FPGA bitstream is as simple as updating a file on the NFS server—no flashing, no downtime, no friction.
System Architecture

Build Host Requirements:
- Ubuntu 24.04 LTS recommended
- ≥250 GB free disk space
- ≥16 GB RAM (32 GB recommended)
- Standard Yocto build dependencies
Install Build Dependencies (Ubuntu 24.04):
sudo apt-get update
sudo apt-get install -y gawk wget git repo diffstat unzip texinfo gcc-multilib \
build-essential chrpath socat cpio python3 python3-pip python3-pexpect \
xz-utils debianutils iputils-ping python3-git python3-jinja2 \
libsdl1.2-dev pylint xterm python3-subunit mesa-common-dev zstd liblz4-tool \
libegl1-mesa-dev

Repository Structure

The project follows a standard Yocto layer structure:
Xilinx_KR260_Yocto/
├── Makefile # Top-level build orchestration
├── README.md # Project documentation
├── conf/
│ ├── bblayers.conf.template # Yocto layer configuration
│ └── local.conf.template # Build configuration template
├── dts/
│ ├── kr260_overlay.dtso # Device tree overlay for hardware
│ └── Makefile # DTS compilation
├── meta-kr260/ # Custom Yocto layer
│ ├── recipes-core/
│ │ └── images/
│ │ └── kria-image-kr260.bb # Rootfs image recipe
│ ├── recipes-kernel/
│ │ └── linux/
│ │ └── linux-xlnx_%.bbappend # Kernel configuration
│ └── recipes-bsp/
│ └── u-boot/
│ └── u-boot-xlnx_%.bbappend # U-Boot configuration
├── tools/
│ └── image.its # FIT image template
└── qspi_image/
└── BOOT.BIN # Prebuilt boot image

Build Process

Step 1: Fetch Yocto Sources
The project uses Xilinx's official Yocto manifests for 2025.2:
make setup-sources

This command:
- Initializes the repo tool with Xilinx's Yocto manifest
- Syncs all required Yocto layers (poky, meta-xilinx, meta-kria, etc.)
- Downloads approximately 10-15 GB of source code
Step 2: Configure Build Environment
make setup-env

This creates the build configuration from templates, setting up:
- Machine: k26-smk-kr-sdt (KR260 with commercial SOM)
- Distribution: EDF (Embedded Development Framework)
- Network configuration: TFTP/NFS server IP, board IP, gateway, etc.
Step 3: Build Components
The build system provides several targets:
# Build kernel FIT image (kernel + device tree)
make build-kernel
# Build root filesystem
make build-rootfs
# Build SDK toolchain for cross-compilation
make build-sdk
# Build everything
make build-all

Build Outputs:
- Kernel FIT Image: build/tmp-glibc/deploy/images/k26-smk-kr-sdt/image.ub
- Rootfs Archive: build/tmp-glibc/deploy/images/k26-smk-kr-sdt/kria-image-kr260-k26-smk-kr-sdt.rootfs.tar.gz
- SDK Installer: build/tmp-glibc/deploy/sdk/kria-toolchain-kr260-sdk.sh
The custom layer extends the base Kria image with FPGA development packages:
Key Recipes:
Image Recipe (recipes-core/images/kria-image-kr260.bb):
- Inherits from kria-image-full-cmdline
- Adds PYNQ 3.1.2, XRT, ZOCL, gRPC, libmetal
- Configures Jupyter Notebook server
- Sets up default user (xilinx/xilinx) with sudo privileges
Kernel Configuration (recipes-kernel/linux/linux-xlnx_%.bbappend):
- Enables UIO (Userspace I/O) drivers for direct hardware access
- Configures remoteproc support for RPU management
- Adds device tree overlay support
U-Boot Configuration (recipes-bsp/u-boot/u-boot-xlnx_%.bbappend):
- Configures network boot parameters
- Sets up TFTP boot command
Network booting provides significant advantages for FPGA development:
1. Rapid Development Cycles
- No SD Card Flashing: Eliminates the need to physically remove and flash SD cards
- Instant Updates: Kernel and rootfs changes take effect on next boot
- Multiple Board Support: One NFS server can serve multiple development boards
- Version Control: Easy rollback by switching NFS directories
- Unlimited Rootfs Size: NFS rootfs is only limited by server storage, not SD card capacity
- Shared Development Environment: All developers work with the same rootfs
- Easy Package Management: Install packages on the server, available to all boards
- Cross-Compilation: Build on host, test on board without file transfer
- Source Code Access: Edit code on host, compile and run on board via NFS
- Debugging: Attach debuggers without worrying about storage constraints
- Backup and Recovery: NFS rootfs can be versioned and backed up easily
Network Settings (configurable in conf/local.conf):
# TFTP/NFS Server IP (development host)
NFS_SERVER = "172.20.1.1"
# Board IP address
BOARD_IP = "172.20.1.2"
# Network configuration
BOARD_GATEWAY = "172.20.1.1"
BOARD_NETMASK = "255.255.255.0"
# NFS root directory
NFS_ROOT = "/nfsroot"

U-Boot Boot Command (configured in device tree overlay):
bootcmd_tftp = "setenv serverip 172.20.1.1; \
setenv ipaddr 172.20.1.2; \
tftpboot 0x10000000 image.ub; \
bootm 0x10000000"

Kernel Boot Arguments (from device tree overlay):
bootargs = "earlycon console=ttyPS1,115200 clk_ignore_unused \
root=/dev/nfs rw \
nfsroot=172.20.1.1:/nfsroot,tcp,vers=3,timeo=14 \
ip=172.20.1.2::172.20.1.1:255.255.255.0:Xilinx-KR260:eth0:off \
cma=900M \
uio_pdrv_genirq.of_id=generic-uio"

Setting Up TFTP and NFS Servers

TFTP Server Setup (Ubuntu):
# Install TFTP server
sudo apt-get install tftpd-hpa
# Configure TFTP directory
sudo mkdir -p /tftpboot
sudo chmod 777 /tftpboot
# Install kernel image
sudo cp build/tmp-glibc/deploy/images/k26-smk-kr-sdt/image.ub /tftpboot/

NFS Server Setup (Ubuntu):
# Install NFS server
sudo apt-get install nfs-kernel-server
# Create NFS root directory
sudo mkdir -p /nfsroot
# Extract rootfs
sudo tar -xzf build/tmp-glibc/deploy/images/k26-smk-kr-sdt/kria-image-kr260-k26-smk-kr-sdt.rootfs.tar.gz -C /nfsroot
# Configure NFS exports
echo "/nfsroot *(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports
# Restart NFS server
sudo systemctl restart nfs-kernel-server

Automated Installation (using Makefile):
# Install kernel to TFTP
make install-kernel
# Install rootfs to NFS
make install-rootfs

Performance Considerations
- Network Speed: Gigabit Ethernet provides ~100 MB/s, sufficient for most development tasks
- NFS Version: Using NFSv3 (TCP) for reliability and compatibility
- Caching: Linux kernel caches NFS filesystem operations for improved performance
- Latency: Network latency is negligible for typical development workflows
PYNQ (Python Productivity for Zynq) provides a Python interface to the programmable logic:
# Example: Loading a bitstream and controlling hardware
from pynq import Overlay
overlay = Overlay("led_control.bit")
led_ctrl = overlay.led_control
led_ctrl.write(0x00, 0xFF)  # Turn on all LEDs (register at offset 0x00)

Benefits:
- Rapid Prototyping: Test FPGA designs without C/C++ compilation
- Interactive Development: Jupyter Notebooks provide immediate feedback
- Python Ecosystem: Leverage NumPy, Matplotlib, Pandas for data analysis
- Educational: Lower barrier to entry for FPGA development
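When a design doesn't ship its own driver, the same interactivity is available through raw MMIO reads and writes. Here's a minimal sketch, assuming a PL register block mapped at 0xA0000000 (the fabric window described in the UIO section below); the 64 KB span and the register offsets are illustrative, not part of the project's recipes:

# Minimal sketch: poke a PL register block directly from Python via pynq.MMIO.
# Base address matches the fabric@A0000000 window listed later; the 0x10000
# span and the offsets used here are illustrative assumptions.
from pynq import MMIO

FABRIC_BASE = 0xA0000000
fabric = MMIO(FABRIC_BASE, 0x10000)   # map 64 KB of PL registers

fabric.write(0x0, 0xFF)               # write a control register at offset 0x0
status = fabric.read(0x4)             # read back a status register at offset 0x4
print(hex(status))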
XRT provides the runtime infrastructure for FPGA acceleration:
- OpenCL Support: ZOCL (Zynq OpenCL) enables OpenCL kernels on the FPGA
- Device Management: /dev/dri/renderD128 for GPU/FPGA rendering
- Memory Management: CMA (Contiguous Memory Allocator) with 900MB reserved
- Bitstream Loading: Dynamic bitstream loading via XRT APIs
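Contiguous buffers backed by that CMA pool can be allocated straight from Python through PYNQ. A minimal sketch, assuming a design whose DMA engine or kernel expects a physically contiguous buffer; the buffer shape and dtype are illustrative:

# Minimal sketch: allocate a physically contiguous, DMA-capable buffer from
# the CMA pool via PYNQ, fill it with NumPy, and expose its physical address.
# The size/dtype and the idea of handing the address to a PL DMA engine are
# assumptions for illustration only.
import numpy as np
from pynq import allocate

buf = allocate(shape=(1024,), dtype=np.uint32)  # CMA-backed buffer
buf[:] = np.arange(1024, dtype=np.uint32)       # behaves like a NumPy array
buf.sync_to_device()                            # flush caches before the PL reads it

print(hex(buf.physical_address))                # address a DMA engine would consume

buf.freebuffer()                                # return memory to the CMA pool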
Pre-configured Jupyter server for interactive FPGA development:
- Port: 9090 (configurable)
- Authentication: Password-protected (xilinx/xilinx)
- Notebook Directory: /home/xilinx/Notebook
- Auto-start: Enabled via systemd service
Access: http://172.20.1.2:9090

4. Remoteproc Support

The Linux remoteproc framework enables the APU to manage RPU firmware:
# List available remoteproc devices
ls /sys/class/remoteproc/
# Load and start RPU firmware
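# (the firmware name written below is resolved relative to /lib/firmware)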
echo firmware.elf > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
# Stop RPU
echo stop > /sys/class/remoteproc/remoteproc0/state

Device Tree Configuration:
- R5F cluster in split mode (independent cores)
- 32MB reserved memory region for firmware loading
- IPI channels for APU-RPU communication
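The same sysfs sequence is easy to wrap for scripted test runs. Here's a minimal sketch, assuming the firmware ELF has already been copied to /lib/firmware and that remoteproc0 maps to the target R5F core; the helper names are mine, not part of the image:

# Minimal sketch: drive the remoteproc sysfs interface from Python.
# Assumes the ELF is already in /lib/firmware and remoteproc0 is the R5F core.
from pathlib import Path

RPROC = Path("/sys/class/remoteproc/remoteproc0")

def load_and_start(firmware_name: str) -> None:
    """Point remoteproc at a firmware image (relative to /lib/firmware) and start it."""
    if (RPROC / "state").read_text().strip() == "running":
        (RPROC / "state").write_text("stop")      # stop any previously loaded firmware
    (RPROC / "firmware").write_text(firmware_name)
    (RPROC / "state").write_text("start")

def stop() -> None:
    if (RPROC / "state").read_text().strip() == "running":
        (RPROC / "state").write_text("stop")

if __name__ == "__main__":
    load_and_start("firmware.elf")
    print((RPROC / "state").read_text().strip())  # expect "running"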
UIO enables direct hardware access from userspace:
// Example: Accessing PL registers via UIO
#include <fcntl.h>
#include <stdint.h>
#include <sys/mman.h>

int fd = open("/dev/uio0", O_RDWR);
void *ptr = mmap(NULL, 0x10000, PROT_READ|PROT_WRITE, MAP_SHARED, fd, 0);
volatile uint32_t *reg = (volatile uint32_t *)ptr;  // register at offset 0x00
*reg = 0x12345678; // Write to PL register

Configured Devices:
- PL fabric registers (fabric@A0000000)
- Shared memory regions (APU-RPU, PL-RPU)
- IPI channels for inter-processor communication
- AXI interrupt controller
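The same UIO access pattern works from Python, which fits the PYNQ/Jupyter workflow on this image. A minimal sketch; the device node and map size are assumptions, so check /sys/class/uio/uio0/maps/map0/size for the real region length:

# Minimal sketch: map a generic-uio device from Python and do 32-bit accesses.
# /dev/uio0 and the 0x10000 length are assumptions; verify them on the target.
import mmap
import os
import struct

fd = os.open("/dev/uio0", os.O_RDWR | os.O_SYNC)
mem = mmap.mmap(fd, 0x10000, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE)

mem[0:4] = struct.pack("<I", 0x12345678)    # write 32-bit register at offset 0x00
value = struct.unpack("<I", mem[0:4])[0]    # read it back
print(hex(value))

mem.close()
os.close(fd)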
Python Development:
- Python 3.11+ with development headers
- pip, setuptools, wheel for package management
- Scientific stack: NumPy, Matplotlib, Pandas, Pillow
System Utilities:
- vim, nano for text editing
- htop for system monitoring
- net-tools, iputils for network debugging
- e2fsprogs for filesystem management
Network Tools:
- SSH server (OpenSSH) for remote access
- tcpdump for packet capture
- iptables for firewall configuration
For distributed systems and microservices:
# Example: gRPC service for remote FPGA control
import grpc
from proto import fpga_control_pb2, fpga_control_pb2_grpc
channel = grpc.insecure_channel('192.168.1.100:50051')
stub = fpga_control_pb2_grpc.FPGAControlStub(channel)
response = stub.LoadBitstream(fpga_control_pb2.BitstreamRequest(path='/path/to/bitstream.bit'))

8. Libmetal and OpenAMP

For shared-memory communication between APU, RPU, and PL (see the RPMsg sketch after the list):
- Libmetal: Low-level shared memory and messaging APIs
- OpenAMP: Remote processor management framework
- RPMsg: Remote processor messaging protocol
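On the Linux side, RPMsg endpoints announced by the RPU firmware typically show up as character devices. A minimal sketch, assuming the firmware exposes an endpoint that the rpmsg_char driver presents as /dev/rpmsg0; the device path and payload format are illustrative:

# Minimal sketch: exchange a message with the RPU over an RPMsg char device.
# Assumes the RPU firmware created an endpoint exposed as /dev/rpmsg0; the
# device path and the 4-byte payload are illustrative assumptions.
import os

fd = os.open("/dev/rpmsg0", os.O_RDWR)
os.write(fd, b"ping")          # send a small payload to the RPU endpoint
reply = os.read(fd, 256)       # block until the firmware answers
print(reply)
os.close(fd)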
Default User:
- Username: xilinx
- Password: xilinx
- UID/GID: 1000/1000
- Sudo privileges: Enabled
- Home directory: /home/xilinx
Hostname: Xilinx-KR260
Init System: systemd (replaces SysV init)
Environment Variables:
XILINX_XRT=/usr
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/lib/xrt/module/

Device Tree Overlay for Hardware Configuration

The device tree overlay (dts/kr260_overlay.dtso) configures hardware resources for the KR260 platform.
rf5ss@ff9a0000 {
compatible = "xlnx,zynqmp-r5fss";
xlnx,cluster-mode = <1>; // Split mode
xlnx,tcm-mode = <1>; // Split TCM
r5f_0: r5f@0 {
compatible = "xlnx,zynqmp-r5f";
memory-region = <&rproc_0_reserved>; // 32MB for firmware
};
};

Memory Region:
- Base: 0x3ed00000
- Size: 32MB (0x2000000)
- Purpose: Firmware loading and RPU execution
Shared Memory Regions:
- shm0 (0x3ed80000, 16MB): APU-RPU shared memory
- shm1 (0x80010000, 512KB): PL-RPU shared memory (BRAM)
IPI Channels:
- ipi_ch0 (0xff300000): APU-RPU signaling
- ipi_ch7 (0xff340000): IPI with interrupt support (GIC SPI 29)
Message Passing:
- msg0 (0xff990000): APU to RPU0 message interface
PL Overlay Support:
- ploverlay0 (0x80000000): PL overlay control register
- intc0 (0x80001000): AXI interrupt controller for PL interrupts
All devices are exposed as generic-uio for userspace access.
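Because every node uses the generic-uio compatible, the /dev/uioX numbering is assigned at probe time and isn't guaranteed to be stable, so it's safer to look devices up by name. A minimal sketch of that lookup; the exact names reported in /sys/class/uio/*/name depend on the overlay's node names, so the example keys are assumptions to verify on the target:

# Minimal sketch: resolve /dev/uioX nodes by their device tree node names
# instead of relying on probe order. The example keys ("fabric", "shm0") are
# assumptions; check /sys/class/uio/*/name on the board.
from pathlib import Path

def uio_devices() -> dict[str, str]:
    """Map UIO device names to their /dev nodes."""
    mapping = {}
    for entry in sorted(Path("/sys/class/uio").glob("uio*")):
        name = (entry / "name").read_text().strip()
        mapping[name] = f"/dev/{entry.name}"
    return mapping

if __name__ == "__main__":
    devs = uio_devices()
    print(devs)                     # e.g. {'fabric': '/dev/uio0', 'shm0': '/dev/uio1', ...}
    print("PL fabric registers:", devs.get("fabric"))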
Ethernet Interface:
- MAC address: 00:0a:35:00:00:00
- Interface renaming: end0 → eth0 (via systemd service)
eMMC Support:
- Controller: mmc@ff160000
- Status: Enabled for standalone boot option
This foundation wasn’t built just for convenience—it unlocks three major follow-up projects:
- Vivado LED Control via PYNQ: Build custom PL designs and interact with them instantly in Python/Jupyter.
- Vitis FreeRTOS on RPU with Remoteproc: Load, start, and stop real-time firmware directly from Linux.
- Custom Linux Kernel Driver: Expose FPGA hardware to userspace via standard sysfs interfaces.
Each of these builds on the same Yocto image, creating a smooth progression from experimentation → real-time control → production-ready drivers.
What started as a desire to “speed things up” ultimately became a full development ecosystem for the KR260: fast to iterate, flexible to extend, and powerful enough for both academic and industrial FPGA workflows.
This is now my standard foundation whenever I begin a new FPGA project—and I hope it helps others build faster, learn deeper, and push the KR260 beyond what comes out-of-the-box.
Resources
- Repository: [GitHub Link]
- Xilinx Kria Documentation: https://xilinx.github.io/kria-apps-docs/
- PYNQ Documentation: https://pynq.readthedocs.io/
- Yocto Project: https://www.yoctoproject.org/
Built with:
- Xilinx Tools 2025.2
- Yocto Scarthgap (5.1)
- PYNQ 3.1.2
- XRT (Xilinx Runtime)
- meta-kria layer




