From Radiologists to Lawyers, Experts Cash In on Booming AI Training Jobs

As tech firms hunt for human expertise, a soft U.S. job market is funneling academics into high-paying gig work


A new kind of gig economy is quietly taking shape — and this time the freelancers are not delivering food or driving passengers but teaching artificial intelligence how to think.

Across the United States, highly educated professionals are increasingly signing up for hourly work training AI systems, drawn by strong pay and flexible schedules at a time when traditional employment opportunities appear harder to secure.

Platforms specializing in matching experts with AI developers have become central to this trend. One San Francisco–based startup, Mercor, has reportedly engaged about 30,000 contract workers over the past year and is valued at roughly $10 billion. Its business model resembles that of marketplace platforms: it connects specialists to tech firms and keeps a share — around 30% — of what clients pay.

The assignments vary widely but share one goal: improve how machines reason. Workers may design questions that chatbots fail to answer, compare outputs from competing models, or explain which responses meet professional standards.

And the compensation can be striking. Radiologists are being offered up to $340 per hour to help develop diagnostic reasoning, while anesthesiologists may earn around $200. Even less specialized roles exist — from simulating call-center conversations with specific accents to analyzing basketball games — suggesting that nearly every field has knowledge AI companies want to absorb.

The average hourly rate across projects is said to be about $85.

For some participants, the work has quickly become financially meaningful. Matthew Simmons, a literature doctoral candidate who previously supported himself by teaching part time, now spends roughly 30 hours a week evaluating AI outputs across several platforms. He estimates his hourly earnings at about $65 — more than triple what he once made grading student essays — effectively doubling his income.

Despite the higher pay, Simmons still hopes to secure a full-time academic position with benefits. Yet the steady flow of AI assignments has made stepping away difficult. The math, he has found, is simple: when the hourly rate is this attractive, logging off is harder than expected.

Others arrived via less planned routes. Ben Spangler, a PhD chemist whose startup lost funding in 2025, was approached by a recruiting platform after advertising consulting services online. He now tests AI systems for pharmaceutical applications, identifying questions they cannot answer reliably.

While the pay is somewhat below his previous consulting rates, Spangler values the flexibility and the chance to observe cutting-edge AI development firsthand. His assessment is pragmatic: the models are persuasive but far from flawless, often requiring expert scrutiny to detect invented sources or logical errors.

The surge in expert gig work reflects two overlapping forces.

First, AI developers increasingly need specialized knowledge as models expand into fields such as medicine, law, and finance. Early training often relied on lower-paid data labeling, frequently outsourced to developing countries. Today’s tasks demand deeper expertise, pushing companies to recruit professionals primarily from industrialized economies including the U.S., the U.K., and Canada.

Second, a cooling labor market has made many qualified workers more open to unconventional roles. Economic uncertainty and longer job searches have nudged unemployed, retired, or underemployed professionals toward platforms promising immediate income.

For AI companies, the arrangement offers efficiency. Rather than forming costly institutional partnerships, they gain direct access to the people who hold the knowledge they need.

Still, the setup raises an obvious question: are these experts effectively training their future replacements?

Opinions differ. Spangler argues that human oversight will remain essential, particularly in high-stakes areas such as drug development where accountability cannot be automated. Simmons is less certain but pragmatic, suggesting that immediate financial realities outweigh longer-term speculation.

One unexpected side effect has been professional skepticism. After months spent dissecting machine-generated answers, Simmons says he now approaches AI responses more cautiously — especially free tools that may prioritize speed over depth.

If nothing else, the phenomenon highlights a paradox at the heart of the AI boom: the smarter machines become, the more urgently they seem to require human teachers. Whether that dependence endures may determine not just the future of work — but who is still doing it.