Tinker Is a Hosted Fine-Tuning Service for Open Source LLMs

Tinker by Thinking Machines offers managed infrastructure to simplify AI training for researchers and developers.

Despite their flaws, large language models (LLMs) are often described as the next frontier of computing. The most popular models, such as GPT-5 and Gemini 2.5 Pro, are closed-source black boxes that expose only a chat interface and an API to the public.

However, if LLMs are to be the “steam engine” of the 21st century, they should be more open and easier to customize.

Tinker by Thinking Machines is a Python-based API for fine-tuning open-source LLMs, i.e., customizing a pre-trained model for specialized tasks. It is accompanied by an open-source library, the Tinker Cookbook, which collects examples and abstractions for customizing training environments.

The name is inspired by the Tinkertoy Computer, a noughts-and-crosses-playing machine that MIT students built from Tinkertoys in the 1970s.

Fine-tuning is the process of taking a pre-trained, general-purpose machine learning model and training it further on a smaller, task-specific dataset. For example, a general language model like GPT-5 can be fine-tuned to specialize in healthcare applications. Fine-tuning is used in natural language processing, computer vision, and speech recognition to create models tailored to particular use cases.

Although less demanding than training a model from scratch, fine-tuning a large language model is still labor- and compute-intensive, and today it is mostly undertaken by well-funded research teams.

Tinker aims to change this by providing a managed fine-tuning service for AI researchers, developers, and technical teams with limited resources. Its users gain access to Thinking Machines’ internal training infrastructure, while the company handles the complexities of distributed computing. Tinker uses an efficient fine-tuning technique called LoRA (low-rank adaptation), which lets multiple users share the “same pool of compute.”

Low-rank adaptation (Source: IBM)
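LoRA's efficiency comes from freezing the pre-trained weight matrix W and learning only a low-rank update BA, so the adapted layer computes Wx + B(Ax). The NumPy sketch below is an illustration of the idea with made-up dimensions, not Tinker's implementation:

```python
import numpy as np

d, k = 4096, 4096  # dimensions of a frozen weight matrix W (illustrative)
r = 8              # LoRA rank, chosen so that r << min(d, k)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, k))          # frozen pre-trained weights
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # trainable; zero-init so W' == W at start

x = rng.standard_normal(k)
# Adapted forward pass: W'x = Wx + B(Ax); only A and B receive gradients.
y = W @ x + B @ (A @ x)

full_params = d * k          # parameters updated by full fine-tuning
lora_params = r * (d + k)    # parameters updated by LoRA
print(f"trainable params: {lora_params:,} vs {full_params:,} "
      f"({lora_params / full_params:.2%})")
```

With these numbers LoRA trains well under 1% of the layer's parameters, which is why many users' adapters can share one set of frozen base weights on the same hardware.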

Tinker supports a range of large and small open-weight models, and the company plans to expand the lineup soon. The API is currently in private beta, and there is a waitlist for access. It is free to use for now, but the company adds that it will “introduce usage-based pricing in the coming weeks.”

Tinker isn’t the only fine-tuning platform; it competes with alternatives such as Hugging Face’s Trainer and Google’s Vertex AI. However, it stands out for its infrastructure abstraction, its fine-grained control via low-level primitives, and its support for large mixture-of-experts models.
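The appeal of low-level primitives is that the user writes the training loop while the service executes each step on managed hardware. The stub below is a hypothetical sketch of that control flow; the client class and its made-up loss behavior are stand-ins, not Tinker's actual API:

```python
# Illustrative sketch of a primitive-style fine-tuning loop.
# StubTrainingClient and its behavior are hypothetical stand-ins for a
# hosted training client; method names are chosen for illustration.

class StubTrainingClient:
    """Minimal stand-in that mimics a loss decreasing over steps."""
    def __init__(self):
        self.loss = 2.0

    def forward_backward(self, batch):
        # A real service would compute loss and gradients on remote GPUs.
        self.loss *= 0.9
        return self.loss

    def optim_step(self):
        # A real service would apply an optimizer update to the adapter weights.
        pass

client = StubTrainingClient()
dataset = [{"prompt": "example input", "completion": "example output"}] * 5

for step, batch in enumerate(dataset):
    loss = client.forward_backward(batch)  # user controls the loop...
    client.optim_step()                    # ...service does the heavy lifting
    print(f"step {step}: loss {loss:.3f}")
```

The design choice this illustrates: unlike a one-shot `train()` call, primitive-level APIs let researchers customize batching, loss handling, and evaluation between steps without managing the underlying cluster.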

WIRED reports that beta users found Tinker more powerful and user-friendly than similar tools. Research groups at Princeton, Stanford, Berkeley, and Redwood Research have already used it to train mathematical theorem provers and multi-agent systems.

Fine-tuning a model using Tinker requires uploading task-specific datasets to Thinking Machines’ servers. The company says it will not use customer data to train its own models.

Tinker was created by Thinking Machines Lab, an artificial intelligence company founded by OpenAI’s former chief technology officer, Mira Murati. The company says it is building a future where everyone has access to the knowledge and tools to make AI work for their unique needs and goals.

There are still other bottlenecks, such as data curation and validation, that restrict the accessibility of large language models, but Tinker and similar platforms bring us one step closer to a time when anyone can build and deploy a custom AI chatbot or copilot agent.

hectoraisin

Freelance writer specializing in hardware product reviews, comparisons, and explainers
