A11yShape Brings LLMs to Bear on Assisting Blind, Visually Impaired Developers with 3D Modeling
OpenAI's GPT-4o large language model sits at the heart of this system, which describes OpenSCAD models to support iterative development.
Researchers at the University of Washington, Purdue University, the Massachusetts Institute of Technology (MIT), the Hong Kong University of Science and Technology, Stanford University, the University of Michigan, and the University of Texas at Dallas, working with NVIDIA, have developed a system for blind and visually impaired developers to create, edit, and verify 3D models: A11yShape.
"Things [like 3D printing and circuit prototyping] are very challenging for blind users, especially when they are doing it alone," explains Liang He, senior author of the work, who was inspired by a blind graduate school classmate's struggles with 3D modeling tasks. "Every single time when he was working on his assignment, he had to ask someone to help him and verify the results. This is a first step toward a goal of providing people with visual impairments with equal access to creative tools, including 3D modeling."
The system is built around OpenAI's GPT-4o large language model (LLM), and starts with the OpenSCAD code users write to generate their 3D models — a step that, as it takes place entirely in text, can be handled using screen readers, Braille displays, and other existing assistive technologies. A11yShape captures images of the rendered model and hands these to the LLM, which crunches them into a token stream and statistically selects continuation tokens — presented to the user as a best-effort description of the model, allowing for iterative development without sighted assistance.
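None of this code appears in the paper, but the pipeline described above — render the OpenSCAD source to an image, then ask GPT-4o for a textual description — can be sketched in a few lines of Python. The file names, the prompt wording, and the choice to send the source code alongside the render are illustrative assumptions rather than details of A11yShape itself:

    # Minimal sketch of a render-then-describe loop; prompt text and file
    # names are illustrative, not taken from the A11yShape implementation.
    import base64
    import subprocess

    from openai import OpenAI  # pip install openai

    SCAD_FILE = "model.scad"   # hypothetical OpenSCAD source written by the user
    RENDER_FILE = "model.png"  # image captured from the rendered model

    # OpenSCAD's command-line interface can render a .scad file straight to a PNG.
    subprocess.run(["openscad", "-o", RENDER_FILE, SCAD_FILE], check=True)

    with open(SCAD_FILE) as f:
        scad_code = f.read()
    with open(RENDER_FILE, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this 3D model for a blind developer. "
                         "The OpenSCAD source is:\n" + scad_code},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )

    # The description arrives as plain text, ready for assistive technologies.
    print(response.choices[0].message.content)

Because the output is ordinary text, it can be consumed through the same screen readers and Braille displays used to write the OpenSCAD code in the first place.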
"He used the first version of the system and gave us a lot of really good feedback, which helped us improve the system," He says of his classmate's input. "The next step is to try to support this process — this pipeline from 3D modeling to fabrication."
The team's work is available in the Proceedings of the 27th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '25) under open-access terms.