I have an open secret about fine-tuning models.

I predict most enterprises will independently rediscover this by the end of the year.

First, some context. I’m an early adopter:

  • I’ve been using GitHub Copilot since July 2021.
  • I started using ChatGPT on November 30, 2022.
  • I experimented heavily with fine-tuning models in 2023 and 2024. For most practical use cases, it was a waste of time and money.
  • I’ve been using Claude Code since February 2025.

Before that, I spent years building ML infrastructure on cross-functional R&D teams:

  • At Magic Leap from 2016 to 2020, we built sparse maps, dense maps, and object recognition pipelines.
  • At Waymo from 2020 to 2022, we built labeling pipelines and planner evaluation test sets.

That work took teams of data scientists, researchers, and infra engineers, plus months of coordination.

Today, Tinker by Thinking Machines is a genuine game changer.

It’s an easy-to-use API that any solid engineer can learn quickly. One person can now do in a weekend what used to take entire teams weeks or months.
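To make "easy to use" concrete, here is a minimal sketch of a LoRA supervised fine-tuning loop in the shape of Tinker's published examples. I'm writing the names from memory, so treat every class, field, and signature below (`ServiceClient`, `create_lora_training_client`, `Datum`, `AdamParams`, the loss-input fields) as assumptions to verify against the docs, not a definitive implementation:

```python
# A hedged sketch of a Tinker LoRA fine-tuning loop. Names and
# signatures are recalled from Tinker's published examples and may
# differ from the current SDK; check https://thinkingmachines.ai/tinker/
import tinker
from tinker import types

service_client = tinker.ServiceClient()  # assumes TINKER_API_KEY is set

# Start a LoRA training run on an open-weights base model
# (model name is illustrative).
training_client = service_client.create_lora_training_client(
    base_model="meta-llama/Llama-3.2-1B",
)

# Turn your proprietary tokenized examples into training data.
# The loss-input field names (weights, target_tokens) are my
# recollection of the cross-entropy inputs; verify against the docs.
tokens = [1, 2, 3, 4, 5]  # stand-in for a real tokenized example
batch = [
    types.Datum(
        model_input=types.ModelInput.from_ints(tokens[:-1]),
        loss_fn_inputs={
            "weights": [1.0] * (len(tokens) - 1),
            "target_tokens": tokens[1:],
        },
    )
]

# The core loop: forward/backward accumulates gradients, optim_step
# applies them. Both calls return futures; .result() blocks on them.
for _ in range(10):
    fwd_bwd = training_client.forward_backward(batch, loss_fn="cross_entropy")
    optim = training_client.optim_step(types.AdamParams(learning_rate=1e-4))
    fwd_bwd.result()
    optim.result()
```

The point isn't the specific calls. It's that the whole loop, data in, gradients, optimizer step, fits on one screen and runs against managed infrastructure.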

The real advantage was never fine-tuning. It’s proprietary data.

Tinker makes that data usable. Queryable. Composable. Easy to iterate on.

If you’re asking where to look for what’s next in AI right now, this is it: [Tinker](https://thinkingmachines.ai/tinker/).

The frontier isn’t bigger models. It’s turning your data into leverage.