For those who have used recent versions of AMD's PetaLinux Tools to develop embedded Linux images for their FPGAs and SoCs/SoMs, you've probably noticed the warning message displayed every time the 2025 version of the tools is sourced:
[WARNING] The PetaLinux toolset is scheduled for deprecation in the 2026.2 release. Users are advised to adopt AMD EDF and its Yocto Project based workflows.

As I mentioned in my previous project post, the process for this new Yocto-based workflow, the Embedded Development Framework (EDF), needed its own dedicated post outside of the installation process for the rest of the AMD FPGA tools. This is mainly because there isn't a whole lot to install for EDF outside of the specific project you're working on, so it's best to go through the entire workflow for a given development board. And as you can tell from the title, I'm going through the EDF workflow targeting my Kria KV260 Vision AI Starter Kit.
Now if you're coming from a traditional Yocto project workflow background then the EDF workflow is literally just that with a bonus SDK that will help you do things like package wic images. However, if you're someone like me where you learned Yocto through using PetaLinux, then EDF initially feels like your bike just had the training wheels violently ripped off and you abruptly crashed into a row of garbage cans.
As a side note, I'm working on an Ubuntu 24.04 host PC, but I didn't come across anything that I think would be different on an Ubuntu 22.04 host (with one minor exception that is noted when it comes up). I make no promises for Ubuntu 20.04 or earlier though.
There are two main workflows for EDF:
- Get a base pre-built image running on the target development board then do all application and kernel development on target.
- Build an SDK for the embedded Linux image from the Yocto project to cross compile and do all development on a host PC.
While initial development can get up and running directly on target very quickly compared to building from the Yocto project on a host, it can become a convoluted process to rebuild everything across multiple targets or rebuild if the SD card/eMMC needs to be re-imaged. So eventually you'll want all of your application/kernel source code with supporting package dependencies built into a Yocto project to rebuild the embedded Linux image ready to go.
EDF on Target Workflow

Starting with the workflow directly on target, which is my Kria KV260, the first step is to download the pre-built boot binary for the K26 SoM and the SD card image from the EDF downloads page.
The K26 boot binary is titled k26-smk-sdt_kria boot.bin and the SD card image is titled SD/Wic Image Kria Generic:
Flash the WIC image on an SD card either directly from the terminal or using a program like Balena Etcher:
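If you'd rather stay in the terminal, a dd invocation along these lines works. This is a sketch: /dev/sdX and the image filename are placeholders you must replace after checking lsblk.

```shell
# Placeholders - set these to your actual SD card device and downloaded image name
SDCARD=/dev/sdX
WIC_IMAGE=kria-generic-sd.wic

# Refuse to run unless the target really is a block device (protects against typos)
if [ -b "$SDCARD" ]; then
    sudo dd if="$WIC_IMAGE" of="$SDCARD" bs=4M status=progress conv=fsync
    sync
else
    echo "$SDCARD is not a block device - check lsblk before flashing"
fi
```

Double-check the device node carefully; dd will happily overwrite the wrong disk.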
One thing I didn't like in the configuration of the pre-built WIC image is that it only allocates a little under 1GB of space on the SD card for the root filesystem, which isn't enough for application/kernel development.
I used GParted to extend the root filesystem's EXT4 partition after flashing the WIC image onto the SD card with Balena Etcher, but this could also be done directly from the command line. I extended the partition to use the full 32GB of my SD card:
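For reference, the command-line equivalent with parted and resize2fs might look like this. A sketch only: it assumes the ext4 rootfs is the second partition on /dev/sdX, so verify both with lsblk first.

```shell
SDCARD=/dev/sdX            # placeholder - your SD card device
ROOTFS_PART="${SDCARD}2"   # assumes the ext4 rootfs is partition 2

if [ -b "$SDCARD" ]; then
    # Grow the partition to the end of the card, then grow the filesystem inside it
    sudo parted "$SDCARD" --script resizepart 2 100%
    sudo e2fsck -f "$ROOTFS_PART"
    sudo resize2fs "$ROOTFS_PART"
else
    echo "$SDCARD is not a block device - check lsblk first"
fi
```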
With the SD card ready to go, next is to flash the KV260 with the pre-built boot binary for the K26 SoM. I personally find that using the boot image recovery tool built into the flash of Kria development kits like the KV260 is the easiest way to do this. This process is outlined on the Kria Wiki here:
After the boot binary on the K26 is updated and the SD card installed, power up the KV260 and connect to its serial port with your application of choice (I'm using PuTTY). Kria SoMs enumerate four serial ports on a host PC; the second one is the serial terminal from the Zynq MPSoC processing system.
The username for the pre-built image is amd-edf, and it will prompt you to create a new password for this user the first time it's powered up.
I assume that to keep the download size of the pre-built SD WIC image small, the necessary libraries for application and kernel development are not pre-installed. So that's the first thing to do after validating a network connection. I connect my KV260 directly to my router via Ethernet (we'll get to how to install a Wi-Fi USB dongle in a future post).
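A quick way to sanity-check the connection before pulling packages (the host name here is just an example; any reliable host works):

```shell
# Ping a known host; -c limits the count, -W the per-reply timeout in seconds
if ping -c 2 -W 5 edf.amd.com >/dev/null 2>&1; then
    NET_STATUS="up"
else
    NET_STATUS="down - check the Ethernet link and DHCP lease"
fi
echo "network: $NET_STATUS"
```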
The pre-built images for AMD FPGAs are a custom Linux distribution that uses the DNF package manager. On top of being able to install each desired package by hand, AMD also places related packages in groups based on target functionality, allowing users to install everything with a single command.
As a prime example, all of the packages required for application and kernel development have been grouped together in the self-hosted package group. So after running a system package update, install the self-hosted package group:
amd-edf:~$ sudo dnf update
amd-edf:~$ sudo dnf install packagegroup-self-hosted

Now this is where you should be able to move on to the next step, but for the Kria boards I found a small bug in the organization of the packages in the EDF software repositories. The URL of the software repository for the v25.11 version of the AMD Yocto project for the Kria ZynqMP generic architecture is https://edf.amd.com/sswreleases/amd-edf/25.11/generic/rpm/rpm_latest/kria_zynqmp_generic/
I found the opkg-arch-config package is missing from this repository for the Kria ZynqMP generic architecture:
Meanwhile, it is present in the repositories for the rest of the architectures, like the ZynqMP generic, Cortex-A53, etc.
Because opkg-arch-config is a dependency for other packages in the self-hosted package group, I initially got the following error when trying to install it on my KV260:
This is what led me to dig around in the EDF software repositories in a browser, where I noticed the opkg-arch-config package was missing from the repository for the Kria ZynqMP generic architecture.
After some further digging, I ended up finding the opkg-arch-config package in a different repository for the Kria ZynqMP generic architecture with the URL https://edf.amd.com/sswreleases/rel-v2025.2/generic/rpm/kria_zynqmp_generic/
So I simply used wget to download it from that URL instead and manually installed the opkg-arch-config package:
amd-edf:~$ wget https://edf.amd.com/sswreleases/rel-v2025.2/generic/rpm/kria_zynqmp_generic/opkg-arch-config-1.0-r0.0.kria_zynqmp_generic.rpm
amd-edf:~$ sudo rpm -i opkg-arch-config-1.0-r0.0.kria_zynqmp_generic.rpm

Then I was able to install the self-hosted package group as expected:
amd-edf:~$ sudo dnf install packagegroup-self-hosted

Build Applications on Target

For application development on target, source files can be edited directly on target or transferred over, then compiled using a Makefile workflow.
As an example that will also verify the installation of the self-hosted package group, download one of the example applications from gnu.org:
amd-edf:~$ wget https://ftp.gnu.org/gnu/hello/hello-2.12.tar.gz
amd-edf:~$ tar -xf hello-2.12.tar.gz
amd-edf:~$ cd hello-2.12

After extracting, run the configure script to generate the Makefile, run make, and install the application:
amd-edf:~/hello-2.12$ ./configure
amd-edf:~/hello-2.12$ make -j8
amd-edf:~/hello-2.12$ sudo make install

Once installed, run the application:
amd-edf:~/hello-2.12$ hello

Build Kernel Modules on Target

For kernel development on target, the kernel-devsrc package provides the entire kernel source tree. This package installs the sources in /usr/src/kernel:
amd-edf:~$ sudo dnf install kernel-devsrc

Once the kernel source tree is installed, export its version as the LOCALVERSION variable to the environment:
amd-edf:~$ export LOCALVERSION="-$(uname -r | cut -d'-' -f3)"
amd-edf:~$ echo $LOCALVERSION

Then export the location of the kernel source tree as the KERNEL_SRC variable to the environment:
amd-edf:~$ export KERNEL_SRC=/usr/src/kernel

Change directories into the kernel source tree and run the Makefile to return the release version number of the kernel source:
amd-edf:~$ cd $KERNEL_SRC
amd-edf:/usr/src/kernel$ make kernelrelease

Make sure the above output matches the output of the uname -r command, which prints the kernel release version currently running on the target:
amd-edf:/usr/src/kernel$ uname -r

After validating that the kernel source tree version matches the kernel running on the target, run the Makefile to prepare the modules within the source for development:
amd-edf:/usr/src/kernel$ sudo -E make modules_prepare

At this point everything is ready for writing your own kernel module source code on the target, or transferring it over. A good example of the code structure of a kernel module is the Xilinx HDMI module.
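Before diving into a full driver like that, a minimal out-of-tree module scaffold can help confirm the modules_prepare step worked. The file names and contents below are my own illustration, not from the AMD docs; the Makefile builds against the tree pointed to by the exported KERNEL_SRC variable.

```shell
mkdir -p "$HOME/my-module" && cd "$HOME/my-module"

# Minimal kernel module source
cat > hello_mod.c <<'EOF'
#include <linux/init.h>
#include <linux/module.h>

static int __init hello_init(void)
{
	pr_info("hello_mod: loaded\n");
	return 0;
}

static void __exit hello_exit(void)
{
	pr_info("hello_mod: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
EOF

# Out-of-tree module Makefile reusing the exported KERNEL_SRC variable
cat > Makefile <<'EOF'
obj-m := hello_mod.o

all:
	$(MAKE) -C $(KERNEL_SRC) M=$(CURDIR) modules
clean:
	$(MAKE) -C $(KERNEL_SRC) M=$(CURDIR) clean
EOF

# Then, on the target with KERNEL_SRC exported and modules_prepare done:
#   make && sudo insmod hello_mod.ko && dmesg | tail -n 1
```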
Change directories back to the home directory and clone the Xilinx hdmi21-modules repository. Once cloned, build the module and install it:
amd-edf:/usr/src/kernel$ cd ~
amd-edf:~$ git clone https://github.com/Xilinx/hdmi21-modules
amd-edf:~$ cd hdmi21-modules/
amd-edf:~/hdmi21-modules$ make -j8
amd-edf:~/hdmi21-modules$ sudo -E make modules_install

Run the depmod command to link the newly installed kernel module to any other kernel modules it might depend on:
amd-edf:~/hdmi21-modules$ sudo depmod

EDF on Host PC Workflow

The workflow on target honestly isn't anything new for AMD boards, especially the Kria boards, which have always had pre-built images available for download on their wiki page. The big change is the workflow on a host PC.
Unbeknownst to me, despite PetaLinux being Yocto-based, it had been spoiling me by setting up each underlying Yocto project, giving it the nice naming convention I've grown attached to, and perfectly compartmentalizing everything via the petalinux-create command.
The main reason EDF is confusing when transitioning from PetaLinux is that the entirety of the Yocto project handling has been cut out of the EDF SDK. So it's up to you to clone the base Yocto project from AMD's repositories and configure/build it; the EDF SDK only comes in for operations like merging multiple files into a WIC image to flash an SD card with. Basically, the EDF SDK is mainly just what the petalinux-package set of commands was in PetaLinux.
When working with a development board that has board support packages/files like the Kria KV260 does, my first goal is to build the Yocto project for that target as-is before modifying it. This means that if my KV260 doesn't boot with the image I build at first, I'm only troubleshooting how I set up/built my Yocto project, and not both the Yocto project setup and whatever custom options I've added to the image (i.e., new device tree nodes, kernel modules, etc.). So the following steps outline how to manually build the equivalent of the SD card WIC image and boot binary we downloaded from AMD's website.
Prep Host Environment

The following steps are the only "install" needed for the EDF tools themselves. To start, I found that my Ubuntu 24.04 host was missing a package dependency after my main installation from the previous post:
~$ sudo apt install lz4

Then create a directory to install the EDF tools into. Previously, I had grouped this in the root directory with the corresponding Vivado/Vitis installation, but it seems there are separate minor version updates happening with EDF, so the tools are better suited to live somewhere in the user home directory:
~$ mkdir -p amd-edf-sdk-cortexa72-cortexa53-amd-cortexa53-mali-common_v25.11

You'll notice a very specific naming convention for my desired installation directory here. This is because there is a version of EDF specific to each architecture.
The K26 SOM on my KV260 kit has Cortex-A53 ARM cores with a Mali-400 MP2 GPU, so I'm installing the cortexa53-mali-common version of the EDF tools (the K24 SOM is the same architecture minus the video codec).
Make the install script executable:
~$ cd ./Downloads
~/Downloads$ chmod +x ./amd-edf-glibc-x86_64-meta-edf-app-sdk-cortexa72-cortexa53-amd-cortexa53-mali-common-toolchain-25.11\ development-S11151020.sh

Then run the script to install the EDF tools:
~/Downloads$ ./amd-edf-glibc-x86_64-meta-edf-app-sdk-cortexa72-cortexa53-amd-cortexa53-mali-common-toolchain-25.11\ development-S11151020.sh

Specify the desired install location, which is the ~/amd-edf-sdk-cortexa72-cortexa53-amd-cortexa53-mali-common_v25.11 directory I created earlier.
While you could use the git tools to manage cloning the AMD fork of the Yocto project, Google's repository management tool, repo, has been smoother to use so far and seems to be the recommendation in all official AMD documentation.
Install repo from Google's repositories using curl:
~$ curl https://storage.googleapis.com/git-repo-downloads/repo > repo
~$ chmod a+x repo
~$ # Create a user-specific ~/bin directory if one does not exist
~$ mkdir ~/bin
~$ mv repo ~/bin/Then add it to the path for the system environment:
~$ gnome-text-editor ./.bashrc
PATH="$PATH:$HOME/bin"
~$ repo --help
usage: repo COMMAND [ARGS]
repo is not yet installed. Use "repo init" to install it here.
The most commonly used repo commands are:
init Install repo in the current working directory
help Display detailed help on a command
For access to the full online help, install repo ("repo init").
Bug reports: https://issues.gerritcodereview.com/issues/new?component=1370071

Again, the steps in this section are only for the installation of the EDF tools and only have to be run the one time.
Create New Yocto Project

Finally, it's time to create a new Yocto project targeting the KV260! Going back to my initial analogy of the transition away from the PetaLinux tools feeling like having the training wheels violently ripped off my bike, here is where you start to see what I meant.
Previously, to create a new Yocto project using the PetaLinux tools it was a simple one line command where you pointed to the BSP of the board being targeted with the -s flag and the desired name of the project directory with the -n flag:
~$ petalinux-create -t project -s /<path to bsp>/kv260.bsp -n plnx_prj_name

This one command created the Yocto project directory and downloaded a copy of the AMD fork of the Yocto project for the specific version tag being used. Now it's up to the user to follow this Yocto workflow manually.
Start by creating the desired project directory:
~$ mkdir -p ./kria_kv260_edf_prjs/yocto_prj/edf_v25.11
~$ cd ./kria_kv260_edf_prjs/yocto_prj/edf_v25.11

Then initialize this folder with the AMD fork of the Yocto project, pointing to the specific EDF version tag being used (which is v25.11, since that is the version of the SDK installed in the previous step):
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11$ repo init -u https://github.com/Xilinx/yocto-manifests.git -b refs/tags/amd-edf-rel-v25.11 -m default-edf.xml

Then actually pull/download the source files using the repo sync command:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11$ repo sync

Once all of the sources of the Yocto project are downloaded, source the EDF SDK, initialize the build environment, and change directories into the build directory the environment script just created:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11$ source ~/amd-edf-sdk-cortexa72-cortexa53-amd-cortexa53-mali-common_v25.11/environment-setup-cortexa72-cortexa53-amd-linux
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11$ source edf-init-build-env
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11$ cd ./build

Since I'm using the KV260 without any custom HDL implemented in the programmable logic at the moment, I don't have to worry about importing a custom hardware platform XSA file. In the event you do have a custom XSA to build on, there is still an EDF SDK command to help with this.
In PetaLinux importing an XSA looked like this:
~$ petalinux-config --get-hw-description ./<path to xsa>

The equivalent EDF command is gen-machine-conf, and the actual XSA file is specified instead of just its location. I found that this command does rely on the Xilinx Software Command-Line Tool (XSCT), so the Vivado tools need to be sourced into the environment as well:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ source /tools/Xilinx/2025.2/Vivado/settings64.sh
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ gen-machine-conf --hw-description ./<path to xsa>/custom_kv260_design.xsa

It's important to take notice that a new machine name variable will be created for your custom hardware, and you need to use it when building the rest of the components in the project.
In this case, the machine name zynqmp-generic-xck26 is what I would use to build components like the root filesystem image targeting my custom XSA. However, I'm going to be using the generic Kria and KV260 machine names in this write-up. I found that a new machine name like this needs to be added to all of the .conf files throughout the meta-kria layer in order to be compatible and build properly, so that will need to be its own write-up since I'm trying to keep this one somewhat generic and not too Kria-specific.
Also, if you were wondering, this is why you needed to source the EDF SDK itself after downloading the Xilinx fork of the Yocto project a couple of steps back: so the gen-machine-conf command would be available.
But I'm super curious about the gen-machine-conf command relying on XSCT, since there have been notes/warnings that the XSCT tools are being deprecated as well. I guess that's a future me problem to figure out.
As I mentioned previously, the PetaLinux tools took care of a lot of things under the hood for you in the Yocto workflow. We already looked at creating the Yocto project and pulling in the hardware XSA, so the next example I wanted to cover was creating a new application.
Note: this section is optional and just an example template for how to add your own custom applications; you can skip to the next step of creating the boot binary if desired.
Once again, the petalinux-create command would take care of everything like creating the new layer, adding the layer to the project, and creating the bare bones files for everything like the bitbake and source code files:
~$ petalinux-create -t apps --template install --name hello-world --enable

Whereas now it is all done by hand, starting with creating the layer for custom applications and adding it to the Yocto project:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ bitbake-layers create-layer ../sources/meta-myapplications
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ bitbake-layers add-layer ../sources/meta-myapplications

Then create the directory structure for the recipe of the specific application being added (hello world as an example):
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ mkdir -p ../sources/meta-myapplications/recipes-hello-world/hello-world/files

Add the hello-world.c and Makefile to ../sources/meta-myapplications/recipes-hello-world/hello-world/files
Where hello-world.c looks like this:
#include <stdio.h>

int main(void)
{
    printf("Hello, World!\n");
    return 0;
}

And the Makefile looks like this:
all: hello-world

hello-world: hello-world.c
	$(CC) $(CFLAGS) $(LDFLAGS) $^ -o $@

install:
	install -d ${DESTDIR}${BINDIR}
	install -m 0755 hello-world ${DESTDIR}${BINDIR}

uninstall:
	${RM} ${DESTDIR}${BINDIR}/hello-world

clean:
	${RM} hello-world

Then create the BitBake recipe for the hello-world app in ../sources/meta-myapplications/recipes-hello-world/hello-world/ that points the build system to hello-world.c and the Makefile. The recipe should be titled something like hello-world_0.1.bb and look like this:
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"

LICENSE = "CLOSED"
LIC_FILES_CHKSUM = ""

SRC_URI = "file://hello-world.c \
           file://Makefile \
          "

S = "${WORKDIR}"

do_compile() {
    oe_runmake
}

do_install() {
    oe_runmake install DESTDIR=${D} BINDIR=${bindir}
}

Then build the application by itself to validate it can build successfully:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ bitbake hello-world

After validating the new application recipe builds successfully, append the following line to the ../sources/meta-myapplications/conf/layer.conf file in order for it to be built into any new root filesystem image:
IMAGE_INSTALL:append = " hello-world"

Configure Kernel and U-Boot Options

Thankfully, the ASCII GUIs for configuring which kernel modules and u-boot options are desired are still the same as they were in PetaLinux.
To launch the kernel configuration ASCII GUI:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ MACHINE=kria-zynqmp-generic bitbake -c menuconfig linux-xlnx

To launch the u-boot configuration ASCII GUI:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ MACHINE=kria-zynqmp-generic bitbake -c menuconfig u-boot-xlnx

Since I'm building the default Kria images, I didn't need to make any changes to the kernel or u-boot configurations.
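If you do end up changing kernel options, the stock Yocto workflow for capturing them should still apply here: the diffconfig task saves your menuconfig changes as a reusable config fragment. This is standard Yocto behavior from the class that provides menuconfig, not something EDF-specific I've verified, so treat it as a sketch:

```shell
# Run inside the initialized build environment; guarded so it degrades gracefully
if command -v bitbake >/dev/null 2>&1; then
    MACHINE=kria-zynqmp-generic bitbake -c diffconfig linux-xlnx
    DIFFCONFIG_NOTE="fragment.cfg written under the linux-xlnx work directory in ./tmp"
else
    DIFFCONFIG_NOTE="bitbake not on PATH - source edf-init-build-env first"
fi
echo "$DIFFCONFIG_NOTE"
```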
Configure Root Filesystem

The ASCII GUI for configuring which package dependencies are built into the root filesystem is gone though, so you just have to specify them in the ./build/conf/local.conf file using the IMAGE_INSTALL variable:
IMAGE_INSTALL = "packagegroup-self-hosted"

Or they can be specified in the BitBake recipe that defines how the image is built. For example, the generic Kria image recipe (kria-image-full-cmdline.bb) uses the same IMAGE_INSTALL variable:
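For illustration, appending to the image recipe's package list from a custom layer could look like this. This is a hypothetical bbappend; the file name and contents are my own, not copied from the meta-kria layer:

```
# kria-image-full-cmdline.bbappend in a custom layer
IMAGE_INSTALL:append = " packagegroup-self-hosted hello-world"
```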
Once any custom layers and recipes have been added and built, the next step is to create the boot binary for the target board, which is my KV260 for this project. The machine will be the K26 SOM (k26-smk-sdt), and since all of the Kria SOMs boot from QSPI, the BitBake recipe to call is kria-qspi:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ MACHINE=k26-smk-sdt bitbake kria-qspi

This is another point where I found some disconnect between the documentation and the configuration of the Yocto project. According to the Kria wiki, there are separate machine names for the Kria SOMs versus each of their development kits (the KV260 and the KR260).
The K26 SOM should have the machine name k26-sm, the KV260 should be k26-smk-kv, and the KR260 should be k26-smk-kr. However, I got the following error when trying to use the k26-smk-kv or k26-smk-kv-sdt machine name:
ERROR: Nothing PROVIDES 'virtual/imgsel' (but /home/whitney/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/sources/poky/../meta-kria/recipes-bsp/kria-qspi/kria-qspi.bb DEPENDS on or otherwise requires it)
imgsel PROVIDES virtual/imgsel but was skipped: The expected file /home/whitney/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build/tmp-k26-smk-kv-cortexa53-fsbl/deploy/images/k26-smk-kv/image-selector-k26-smk-kv.bin is not available.
Set IMGSEL_FILE to the path with a precompiled IMGSEL binary.
ERROR: Required build target 'kria-qspi' has no buildable providers.
Missing or unbuildable dependency chain was: ['kria-qspi', 'virtual/imgsel']

The only machine name for any K26 SOM development platform that I found to be compatible with the kria-qspi recipe for creating the Kria boot binary was k26-smk-sdt. The resulting boot.bin file appears to work just fine on both my KV260 and KR260 so far, and there is a high possibility that this is still an error on my part, so I will continue to update here if needed.
Once the build is complete, the boot binary boot.bin file can be found in the ./build/tmp/deploy/images/<machine name> output directory, which is ./build/tmp/deploy/images/k26-smk-sdt in my case.
Flash it onto the KV260 using the same steps outlined above for the pre-built boot binary.
Build Linux Image

With the boot binary built, the next step is to build the root filesystem SD card image. For any of the Kria SOMs, the machine name to use is kria-zynqmp-generic, and the Kria SOMs also have their own dedicated image recipe called kria-image-full-cmdline:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ MACHINE=kria-zynqmp-generic bitbake kria-image-full-cmdline

The kria-zynqmp-generic machine name gives this gross warning message, but it is the only config that works the same as the pre-built image:
WARNING: The kria-zynqmp-generic machine is intended to be included by other machines, it should not be used by itself. For a non-machine, SoC specific filesystem, please use one of the common machines defined in meta-xilinx-core.

I also confirmed it is the same architecture being used in the pre-built image, because I noticed that the DNF package manager echoed back the architecture when updating the system:
Again, this might be a flaw in my workflow, but I tested the image I generated here on both my KV260 and KR260 and haven't found any issues.
Once the build is complete, the root filesystem outputs and WIC SD card image can be found in the ./build/tmp/deploy/images/<machine name> output directory, which is ./build/tmp/deploy/images/kria-zynqmp-generic in my case.
And the image can be flashed onto an SD card using the same steps as above for the pre-built image.
I did have to use the same workaround for installing the opkg-arch-config package, since that is an issue with the software repository itself.
I also confirmed the Kria system monitor webserver application that's built into the image via the kria-image-full-cmdline recipe works as expected. When the Kria is booting, it outputs the static address and port number the webserver is running on right before giving the user login prompt (as seen in the last screenshot).
So now that I'm satisfied the image I'm building via the Yocto project works as expected on my KV260, I feel confident moving on to custom application development using this Yocto project. Again, I think it's important to have a Yocto project that can output an image with everything built into it as needed, versus having to reinstall everything on the target once you get past a certain point in development.
Build SDK of Target Linux Image

The final thing I wanted to cover in this write-up before it gets too long is how to generate the sysroot of your target's root filesystem in order to use the Vitis IDE for application development and take advantage of its application templates.
The machine name needs to match whatever was used to generate the root filesystem and WIC SD card image, the only difference is that the application SDK bitbake (meta-edf-app-sdk) is called instead:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ MACHINE=kria-zynqmp-generic bitbake meta-edf-app-sdk

The compressed sysroot is then available in ./build/tmp/deploy/sdk along with the script to install it in whatever location is desired for use by a Vitis workspace:
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build$ cd ./tmp/deploy/sdk
~/kria_kv260_edf_prjs/yocto_prj/edf_v25.11/build/tmp/deploy/sdk$ ./amd-edf-glibc-x86_64-meta-edf-app-sdk-cortexa72-cortexa53-kria-zynqmp-generic-toolchain-25.11+development.sh -dir <path to install dir>

In order to develop Linux applications in Vitis, a hardware XSA is required to create the platform component. Since this sysroot was created using the generic Kria image, I just created a Vivado project targeting the KV260 that only had the ZynqMP Processing System IP instantiated and generated an XSA file to match the kria-zynqmp-generic machine:
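As a side note, the installed SDK can also be used for plain command-line cross-compilation outside of Vitis. A sketch, where the install directory is a hypothetical path standing in for whatever you passed to the -dir flag, and the environment script name is an assumption about what the installer drops there:

```shell
SDK_DIR="$HOME/kria-edf-sdk"   # hypothetical path passed to the -dir flag above

# The installer drops an environment-setup-* script in the install dir
ENV_SETUP=$(ls "$SDK_DIR"/environment-setup-* 2>/dev/null | head -n 1)

if [ -n "$ENV_SETUP" ]; then
    . "$ENV_SETUP"
    # $CC is now the aarch64 cross compiler with --sysroot preset by the script
    echo 'int main(void){return 0;}' > cross-test.c
    $CC cross-test.c -o cross-test
    file cross-test   # should report an ARM aarch64 ELF executable
else
    echo "no SDK found under $SDK_DIR - adjust SDK_DIR to your -dir path"
fi
```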
Hopefully this was helpful to at least get started with the new EDF and Yocto workflow. I will be covering a lot more, particularly on the Kria boards, but I also want to show this process on a Zynq-7000 board as well. Let me know what other boards would be of interest for an EDF tutorial.