Apple Opens Access to Its On-Device Large Language Model, Integrates ChatGPT Into Xcode
New LLM-powered features come even as its own researchers warn that "reasoning models" may be hurtling towards collapse.
Apple has announced it is making its on-device Apple Intelligence large language model (LLM) available to macOS and iOS developers, as well as integrating LLMs into its Xcode development platform — even as its own researchers publish a paper suggesting "reasoning models" are little more than an illusion.
"Developers play a vital role in shaping the experiences customers love across Apple platforms," says Apple's Susan Prescott of the news, which was announced by the company at an event this week. "With access to the on-device Apple Intelligence foundation model and new intelligence features in Xcode 26, we’re empowering developers to build richer, more intuitive apps for users everywhere."
Large language models, which encode vast quantities of often illegitimately-obtained training data into tokens and then return the most statistically likely continuation tokens in response to a similarly-encoded user prompt, deliver something in the shape of an answer but with no guarantee of factuality. They are, nevertheless, enjoying a major moment in the sun. Apple's attempt to capitalize on that moment with Apple Intelligence and the promise of a smarter, though still yet-to-launch, Siri assistant has proven a rocky one so far, but the company is hoping that making the system more accessible to third-party developers may prove the secret to success.
In macOS "Tahoe" 26, iOS 26, and iPadOS 26, Apple is to make its on-device LLM accessible to all through what it calls the Foundation Models framework. With this, Swift developers can access the model — which runs locally, without the need to send user data to a remote server — for guided content generation and tool calling. The Xcode 26 development platform, meanwhile, comes with direct integration for OpenAI's ChatGPT — which does not run locally, and requires all data to be transmitted to the company's remote servers — and the promise of compatibility with other third-party hosted LLMs.
While Apple was talking up the capabilities of so-called "reasoning models" at the event, its own researchers have published a paper (PDF) warning that the marketing might not match the reality. "We found that LRMs [Large Reasoning Models] have limitations in exact computation," the researchers admit.
"They fail to use explicit algorithms and reason inconsistently across puzzles. Despite sophisticated self-reflection mechanisms, these models fail to develop generalizable reasoning capabilities beyond certain complexity thresholds. Standard LLMs outperform LRMs at low complexity, LRMs excel at moderate complexity, and both collapse at high complexity. [Our] insights challenge prevailing assumptions about LRM capabilities and suggest that current approaches may be encountering fundamental barriers to generalizable reasoning.
Apple's promised new features are available for beta testers through the Apple Developer Program now, with a public beta scheduled for next month and a general release for the fall.