stmnk, Claudia M
Published under the Apache-2.0 license

Code LLM workflow for education

This project is an exploration of how best to use the Radeon Pro W7900 in a code LLM training workflow designed for interrogative (question) generation tasks.

Intermediate · Work in progress · 10 hours

Things used in this project

Hardware components

AMD Radeon Pro W7900 GPU (48 GB VRAM)
Awarded through the hardware application process. Used for the model training and evaluation pipelines, together with the ROCm 6.0 library and machine learning software such as PyTorch and Transformers.
×1
AMD Radeon Pro VII (16 GB VRAM)
Already owned GPU, used alongside the flagship W7900 and working without issues for ML/LLM training and inference tasks on ROCm 5.7 (the last version that supports this card). Because its architecture differs from the W7900's, experimenting with data, model, and pipeline parallelism was not possible. A short device-check sketch follows this hardware list.
×1
ASUS ProArt B650-Creator Motherboard
Already purchased motherboard, tested with the AMD Radeon Pro VII in the secondary PCIe slot; it should accommodate the flagship Radeon W7900 in the main PCIe slot if/when it is received. Works without issues during ML/LLM training and inference tasks.
×1
AMD Ryzen 5 (7000 series) CPU
Already purchased CPU, tested with the motherboard and the Pro VII GPU; works with the ROCm library and handles ML/LLM training and inference tasks without issues.
×1
Kingston Fury 64 GB DDR5 RAM
Already purchased RAM, tested with the motherboard, CPU, and Pro VII GPU, working without any issues while performing ML/LLM training and inference tasks.
×1
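
Because the two cards target different ROCm versions and GPU architectures, it helps to confirm which devices PyTorch actually sees before launching a run. The following is a minimal sketch, not code from the project repository; it assumes a ROCm build of PyTorch (which exposes AMD GPUs through the torch.cuda API) and that the W7900 in the main PCIe slot enumerates as device 0.

import torch

# Enumerate the GPUs visible to the ROCm build of PyTorch.
if torch.cuda.is_available():
    for idx in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(idx)
        print(f"GPU {idx}: {props.name}, {props.total_memory / 1024**3:.1f} GiB")
    # Assumption: the W7900 in the main PCIe slot enumerates as device 0.
    device = torch.device("cuda:0")
else:
    device = torch.device("cpu")

print("Selected training device:", device)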

Software apps and online services

AMD ROCm
Ubuntu 22.04 LTS OS from Canonical
PyTorch
Transformers
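
A quick environment check like the sketch below (again an illustration, not project code) confirms that the installed PyTorch is a ROCm build and that Transformers and a GPU are available before training starts.

import torch
import transformers

print("PyTorch version:", torch.__version__)         # e.g. a +rocm build
print("HIP/ROCm runtime:", torch.version.hip)         # None on non-ROCm builds
print("Transformers version:", transformers.__version__)
print("GPU available:", torch.cuda.is_available())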

Story


Schematics

Diagrammatic workflow

Code

Training a code LLM for education

Workflow for training a code LLM for education: https://gitea.com/stmnk/qamd.git
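
For orientation, a fine-tuning run of this kind with Hugging Face Transformers typically looks like the sketch below. This is an illustrative outline only, not the contents of the qamd repository: the model name, dataset layout (JSON lines with a code snippet and a target question), and hyperparameters are placeholder assumptions.

from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

MODEL_ID = "Salesforce/codegen-350M-mono"   # placeholder small code model
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Placeholder dataset: JSON lines, each with a "code" snippet and a "question".
raw = load_dataset("json", data_files={"train": "train.jsonl"})

def to_features(example):
    # Concatenate the snippet and its question into one training sequence.
    text = f"### Code:\n{example['code']}\n### Question:\n{example['question']}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = raw["train"].map(to_features, remove_columns=raw["train"].column_names)

args = TrainingArguments(
    output_dir="qamd-checkpoints",
    per_device_train_batch_size=2,      # conservative for 16-48 GB of VRAM
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,                          # mixed precision, assuming bf16 support on the card
    logging_steps=10,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()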

Credits

stmnk
Claudia M
