College grads in ‘AI-proof’ careers like psychology and education are seeing negative returns on their degrees
As AI reshapes white-collar work, new research shows that some of the most popular graduate degrees are actually leaving holders worse off financially.
There’s a boom in one corner of the economy: economics papers on the souring prospects of the recent college graduate in the AI-era economy of the 2020s. Harvard economists Lawrence Katz and Claudia Goldin found in September 2025 that the college wage premium remains, but has barely moved since 2000, while the San Francisco Fed, in a working paper shortly afterward, attributed that stagnation primarily to weaker demand for those workers. The World Economic Forum found earlier this year that AI skills now command a 23% wage premium versus only 8% for a bachelor’s degree in isolation. Dallas Fed economist J. Scott Davis may have made the biggest splash in February 2026 with a paper that found AI is simultaneously reducing entry-level hiring and raising wages for experienced workers in the same AI-exposed occupations.
But what about the college grads who intentionally got degrees in supposedly “AI-proof” disciplines, like psychology or education?
A new report released by the Postsecondary Education and Economic Research Center maps out the estimated payoff of a graduate degree. When factoring in the costs of a graduate degree—tuition and fees—some degree holders are actually coming out the other end with negative returns. The worst returns are for psychology graduate degrees, with a -8% cost-adjusted return, or the estimated change in lifetime income after accounting for the cost of attendance.
The report also found that clinical psychology—a specialized branch of psychology—offers -5% cost-adjusted returns. Social work and curriculum and instruction degrees also offer negative returns, according to the study. Other popular degrees, such as computer science, yield only a 6% return after adjusting for costs.
“If you’re thinking about graduate school, you want to get some information about what the earnings potential is coming out of the degree as well as the kinds of occupations and jobs it leads to,” Joseph G. Altonji, a professor of economics at Yale and co-author of the study, told Fortune.
Over the years, more and more students have bet on a graduate degree to boost their salaries. The percentage of Americans with a graduate degree grew from 31% in 1993 to 42% in 2022, according to the U.S. Census Bureau. But as AI threatens the future of white-collar work, Gen Z, the generation just entering the workforce, is being forced to break with traditional work norms as the technology sparks a white-collar reckoning.
Research from Anthropic last month revealed that AI is theoretically capable of performing the majority of tasks in white-collar fields, such as engineering, law, and business and finance. As the Census suggests, many are still turning to the post-graduate degree (but a growing number are also ditching college altogether). Yet even as AI threatens to take jobs, some of the roles considered relatively safe from automation offer little in the way of job security.
To calculate the estimates, Altonji and his co-author Zhengren Zhu, a professor at Vassar College, used administrative data from the Texas Education Research Center to develop causal estimates for 121 specific advanced degrees. The study moves beyond salary comparisons by accounting for a student’s outside options—the estimated earnings they would have achieved had they not pursued the graduate degree.
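The cost-adjusted return concept can be illustrated with a toy calculation. This is a simplified sketch, not the authors’ actual model (which uses causal estimates of counterfactual earnings); the dollar figures below are hypothetical and chosen only to show how a degree that raises earnings can still produce a negative return once costs are counted.

```python
def cost_adjusted_return(lifetime_with_degree: float,
                         lifetime_without_degree: float,
                         cost_of_attendance: float) -> float:
    """Percentage change in lifetime income from the degree,
    net of the cost of attendance, relative to the outside option."""
    net_gain = (lifetime_with_degree
                - lifetime_without_degree
                - cost_of_attendance)
    return net_gain / lifetime_without_degree * 100

# Hypothetical: a degree that lifts lifetime earnings by $100,000
# but costs $260,000 to obtain still leaves the holder worse off.
print(round(cost_adjusted_return(2_100_000, 2_000_000, 260_000), 1))
```

The key design point is the counterfactual baseline: the return is measured against what the student would have earned without the degree, not against zero, which is why a raw salary bump can still translate into a negative figure.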
The hidden cost of going back to school
Students are increasingly questioning the value proposition of higher education. Aside from the threats of AI, some are finding it hard to justify even a four-year degree. The unemployment rate of recent college graduates has recently surpassed the unemployment rate for all workers, according to data from the Federal Reserve Bank of New York. But it’s also possible that the key motivation for many students entering a graduate program isn’t to boost their salary. Many could be looking to make a career pivot, for example.
To be sure, graduate degrees overall do on average increase students’ earnings by around 17%, according to the researchers. And even as AI threatens to overtake law and business jobs, law degree and MBA holders still make 41% and 13% in cost-adjusted returns, respectively—solid returns, though still a far cry from the 173% return a doctor of medicine (MD) degree offers. The MD’s outsized return comes even after factoring in the average $228,959 students of medicine must pay to earn the degree.
Engineering, one of the careers most vulnerable to automation, is already seeing relatively low returns. While average annual earnings for all engineering graduates are in the six figures, the payoff is slim: electrical and mechanical engineering graduates see only 4% cost-adjusted returns, and for computer engineering the figure is just 2%.
Of course, many students heading into those master’s programs majored in the same fields as undergraduates, fields that already carry high average annual earnings, which explains the marginal gains observed in the study. Electrical and computer engineering graduates, for instance, earn over $82,000 annually before even starting their graduate programs, according to the study.
But Altonji said the payoff for those degrees could still be particularly high for students coming from humanities backgrounds. “The percentage gain in earnings is higher for those degrees,” he said. “It’s higher for people who come from some fields like say, English, or some of the humanities majors, some of the majors that are associated with lower earnings.”