Thinking Machines Lab launches its first AI product, ‘Tinker’

Former OpenAI CTO Mira Murati’s Thinking Machines Lab (TML) has unveiled Tinker, its highly anticipated first product: a powerful, user-friendly API designed to simplify and accelerate the fine-tuning of large language models (LLMs). Launched on October 1, 2025, Tinker abstracts away the complexities of distributed training, allowing researchers, developers, and businesses to experiment with custom models using standard Python code. With TML’s $2 billion seed funding at a $12 billion valuation—led by Andreessen Horowitz—this release positions the San Francisco-based startup as a key player in the AI customization space, challenging tools from Hugging Face and OpenAI.

For AI researchers, startups building specialized models, and enterprises seeking tailored AI solutions, Tinker democratizes access to frontier-level fine-tuning, enabling tasks like mathematical theorem proving or chemistry reasoning without infrastructure headaches. As TML’s team—stacked with ex-OpenAI talent like John Schulman—aims to “extend the number of important operations we can perform without thinking,” Tinker could accelerate innovation in a $100 billion AI market. Let’s explore its features, technical underpinnings, and broader implications.

Tinker’s Core Features: Simplicity Meets Frontier Power

Tinker is a managed service running on TML’s internal GPU clusters, handling scheduling, resource allocation, and failure recovery so users focus on data and algorithms. It supports LoRA (Low-Rank Adaptation) for efficient fine-tuning, sharing compute across runs to cut costs by up to 50% compared to full retraining.
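
To make the LoRA idea concrete, here is a minimal sketch (PyTorch-style Python, not TML's code) of a frozen pretrained linear layer wrapped with a trainable low-rank update:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B(A x)."""
    def __init__(self, base: nn.Linear, r: int = 16, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                          # pretrained weights stay frozen
        d_out, d_in = base.weight.shape
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # down-projection
        self.B = nn.Parameter(torch.zeros(d_out, r))         # up-projection, zero-init so
        self.scale = alpha / r                               # training starts at the base model

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(4096, 4096), r=16)              # only A and B receive gradients
```

Because the pretrained weights never change, many users' small adapters can be trained and served against one shared copy of the base model, which is presumably how Tinker shares compute across runs.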

Key capabilities:

  • Python-Native API: Low-level primitives like forward_backward and sample for custom loops; switch models (e.g., small to large) by changing a single string (see the sketch after this list).
  • Distributed Training: Abstracts GPU orchestration; scales from small experiments to large-scale tuning without setup.
  • Open-Source Cookbook: A GitHub library with modern post-training methods (e.g., RLHF, DPO) running atop the API—free for community contributions.
  • Beta Access: Early users include Princeton (theorem provers), Stanford (chemistry models), and Redwood Research; public waitlist open.
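
Based on the primitives named above, a custom fine-tuning loop might look like the following sketch. Only forward_backward and sample are named in TML's announcement; the client, the other method names, and the model string are illustrative assumptions, not TML's published API:

```python
# Hypothetical sketch of a custom loop over Tinker's primitives. Only
# forward_backward and sample appear in TML's announcement; everything
# else here is an assumed placeholder.
import tinker  # assumed package name

client = tinker.ServiceClient()                    # assumed entry point
training = client.create_training_client(
    base_model="Llama-3.1-8B",                     # switching model size = changing this string
)

batches = [{"prompt": "2+2=", "completion": "4"}]  # stand-in for a real dataset
for batch in batches:
    # forward_backward: forward pass plus gradient computation on TML's clusters
    loss = training.forward_backward(batch)
    training.optim_step()                          # assumed optimizer-step call
    print(f"loss: {loss:.4f}")

# sample: generate from the current weights, e.g. for evals or RL rewards
print(training.sample(prompt="Prove that sqrt(2) is irrational."))
```

The appeal of this shape is that the user keeps the training loop (data, loss, algorithm) while the service keeps the hardware.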

Murati emphasized: “Tinker brings frontier tools to researchers, offering clean abstractions for writing experiments while handling distributed training complexity.” Early testers say it retains roughly 90% of the algorithmic control of a hand-built stack with roughly 90% less infrastructure pain.

Technical Edge: LoRA Efficiency and Custom Kernels

TML’s research—published alongside the launch—demonstrates LoRA matching full fine-tuning performance with far less compute, validated on models up to 671 billion parameters. Tinker leverages custom kernels in TileLang (prototyping) and CUDA (production) for speed, with paged indexers minimizing memory overhead.
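
As a back-of-the-envelope illustration of where the savings come from (standard LoRA arithmetic, not figures from TML's report): for a d×k weight matrix and rank r, LoRA trains r(d+k) parameters instead of d·k:

```python
d, k, r = 8192, 8192, 32
full = d * k           # parameters updated by full fine-tuning of one matrix
lora = r * (d + k)     # parameters in the low-rank factors A and B
print(f"full: {full:,}  lora: {lora:,}  ratio: {lora/full:.2%}")
# full: 67,108,864  lora: 524,288  ratio: 0.78%
```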

Benchmark highlights from TML’s report:

  • Efficiency: 50%+ compute reduction for long-context tasks vs. baselines.
  • Performance Parity: Matches V3.1-Terminus scores on math and coding benchmarks while enabling newer post-training methods, such as reinforcement-learning baselines.
  • Scalability: Handles 160K token contexts on adapted hardware, ideal for document analysis or code gen.

Schulman, TML’s chief scientist, called it “the infrastructure I’ve always wanted,” quoting Alfred North Whitehead on civilization’s progress through automation.

The Bigger Picture: TML’s Ambitious Start in a Crowded AI Landscape

Founded in February 2025 as a public benefit corporation, TML quickly assembled a team of 30+ former OpenAI, Meta, and Mistral researchers, including Jonathan Lachman and Andrew Tulloch. Advised by OpenAI’s Bob McGrew and Alec Radford, it secured $2 billion from a16z, NVIDIA, Accel, and even Albania’s government ($10 million). Structured to give Murati majority voting rights, TML focuses on “making AI systems more widely understood, customizable, and capable.”

Tinker’s timing—pre-OpenAI’s GPT-5—targets the fine-tuning boom, where 70% of enterprise AI use cases require customization. It competes with Hugging Face’s AutoTrain ($10-50K runs) and Replicate’s API, but TML’s managed clusters and open cookbook offer a researcher-first edge.

Implications: Democratizing AI Customization

For researchers, Tinker lowers barriers to frontier experiments, potentially spawning breakthroughs in RLHF or domain-specific models. Startups and enterprises gain affordable tuning (under $0.28 per million tokens via API), accelerating custom AI without $10M+ infrastructure. For investors, the launch validates TML’s $12B valuation, and vibe-coding tools like Anything hitting $2M ARR within weeks signal the broader trend.

Challenges remain: vetting for misuse (e.g., harmful fine-tunes) and scaling access beyond the beta. As TML publishes more of its research on neural networks, Tinker could evolve into a full platform.

Conclusion: Tinker’s Tune-Up for AI’s Future

Thinking Machines Lab’s Tinker launch is a masterstroke: a flexible API that turns fine-tuning from drudgery to delight, backed by $2 billion and OpenAI alumni. In a world of black-box models, it empowers hackers and PhDs alike; watch for the waitlist flood and what comes next.
