The Efficiency Trap

The Hidden Cost of "Efficiency"

What do we actually lose when algorithms replace doctors, loan officers, and hiring managers — and why the people who bear that cost rarely appear in the spreadsheet?

There is a word that has become a kind of incantation in the technology industry. It is deployed at investor presentations, inserted into press releases, and offered — without irony — as a complete defense of systems that touch the most consequential moments of human life. The word is efficiency. And we have accepted it, for too long, as if it were a value rather than a measure.

The Résumé Filter

In 2018, Reuters reported that Amazon had quietly shelved an AI recruiting tool it had spent years developing. The system, trained on a decade of hiring data, had taught itself that maleness was a proxy for competence — penalizing résumés that included the word "women's" and downgrading graduates of all-women's colleges.

Amazon scrapped the tool. Most companies don't. Across corporate America, automated screening systems now review the majority of applications before a human being reads a single word. The pitch is straightforward: remove bias, increase throughput, reduce cost. The reality is more complicated.

"These systems don't remove bias," says Dr. Safiya Umoja Noble, whose research examines algorithmic discrimination. "They launder it. They take the biased decisions of the past, encode them in math, and return them to us wearing a lab coat." When a system is trained on historical hiring data, it learns not who is qualified but who has historically been hired — a dataset shaped by decades of discrimination it was supposedly designed to eliminate.

73% of large U.S. employers now use automated tools to screen or rank candidates before human review, per a 2023 Harvard Business School survey.

But there is a subtler loss that statistics don't capture. When a hiring manager reads a résumé, they can ask questions an algorithm cannot: Why is there a two-year gap? (A sick parent, a creative project, a layoff during a recession.) What does "team lead" mean in a context where titles are cheap? What does this person want, and can we offer it to them?

The interview — even the imperfect, bias-prone human interview — is also an opportunity for the candidate. They can make a case for themselves. They can be seen. An automated rejection email offers no such exchange. It is not a decision; it is a sorting.

The Credit Score's Cousin

In lending, algorithmic underwriting has existed for decades — the FICO score dates to 1989. But the new generation of AI-driven credit tools goes far beyond traditional metrics. Alternative data sources now include rent payment history, phone bill punctuality, social media activity, shopping patterns, and in some documented cases, the creditworthiness of your zip code's neighbors.

Proponents argue these tools expand credit access to people with thin credit files — the young, the recently immigrated, the previously unbanked. There is some evidence for this. There is also evidence of the opposite.

A landmark 2022 study by the National Community Reinvestment Coalition found that AI mortgage underwriting models rejected Black and Latino applicants at rates far higher than white applicants with similar financial profiles. The systems were not using race as a variable. They didn't need to. Zip code, school attended, and commute time were doing the same work with legal impunity.

40% higher denial rates for Black mortgage applicants using AI underwriting versus comparable white applicants, in NCRC's 2022 analysis of 2.4 million loan applications.
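It is worth seeing how little this takes. The sketch below uses entirely fabricated data and a generic classifier, not NCRC's methodology: race is never handed to the model, yet one correlated feature, here a zip code, is enough to reproduce the historical disparity.

```python
# Proxy discrimination in miniature (all data fabricated; a generic
# sketch, not the NCRC study's methodology).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# A segregated world: zip code aligns with a protected attribute 85%
# of the time. The protected attribute itself is never in the data.
protected = rng.integers(0, 2, n)
zip_code = np.where(rng.random(n) < 0.85, protected, 1 - protected)

# Identical financial profiles: both groups draw income from the same
# distribution. Historical approvals, however, were biased.
income = rng.normal(60, 10, n)
approved = (income - 12 * protected + rng.normal(0, 5, n)) > 55

# Train on income and zip code only -- race "removed for fairness".
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

for g in (0, 1):
    print(f"group {g}: predicted approval rate {pred[protected == g].mean():.1%}")
```

In this toy, dropping the zip code column collapses the disparity, because income was the only legitimate signal left. In the real world, the proxies are many, entangled, and far harder to enumerate.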

The old loan officer model was hardly just. It was shot through with personal bias, inconsistency, and outright discrimination. But it contained something the algorithm cannot replicate: the possibility of an explanation, a negotiation, an appeal. You could bring in your business plan. You could explain the divorce that cratered your credit. You could be a person, rather than a score.

When you are denied by a model, you are denied by no one. There is no one to argue with. There is no one to hold responsible. The decision simply arrives, laundered of accountability, and the path forward is unclear.

The Diagnostic Engine

Medicine may be the domain where the efficiency argument is most seductive — and most dangerous. Radiologists miss things. Dermatologists have bad days. Primary care physicians have twelve minutes per patient and seventeen open browser tabs. AI, the argument goes, is tireless, consistent, and increasingly accurate.

The evidence on AI diagnostic accuracy in controlled settings is genuinely impressive. In narrow, well-defined tasks — screening mammograms, flagging diabetic retinopathy, identifying certain skin lesions — machine performance rivals or exceeds expert human performance. This is not propaganda. It is real.

But clinical medicine is not a controlled setting. A chest X-ray is read in the context of a conversation. The patient who came in "for something minor" and mentioned, almost as an aside, that she hasn't been sleeping. The man who winced when he stood up from the waiting room chair. The teenager who showed up for a sports physical and couldn't make eye contact.

None of these observations are inputs. None of them appear in the data.

54% of U.S. hospitals now use at least one AI clinical decision-support tool, up from 29% in 2019 — with minimal standardized oversight of outputs.

A 2019 Science paper found that a widely deployed healthcare algorithm — used to manage care for 200 million patients annually — systematically underestimated the health needs of Black patients because it used past healthcare spending as a proxy for health need. Poorer patients had spent less. The algorithm concluded they needed less care.
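The failure is arithmetic, and it fits in a dozen lines. The numbers below are fabricated; the structure is the one the Science paper describes: rank patients by past spending, and a sick patient with poor access to care falls below a healthier one with good access.

```python
# The spending-as-proxy failure, in stylized form (numbers fabricated;
# the structure follows the mechanism the 2019 Science paper describes).
patients = [
    # (id, chronic_conditions, past_annual_spending_usd)
    ("A", 4, 12_000),  # high need, good access: need shows up as spending
    ("B", 4,  4_000),  # same need, but barriers to care suppressed spending
    ("C", 1,  9_000),  # low need, high utilization
]

# Proxy target: past spending. Patient B, exactly as sick as A,
# ranks below the far healthier C.
by_spending = sorted(patients, key=lambda p: p[2], reverse=True)
print("by spending proxy:", [p[0] for p in by_spending])  # A, C, B

# Direct target: need. The ordering changes.
by_need = sorted(patients, key=lambda p: p[1], reverse=True)
print("by actual need:   ", [p[0] for p in by_need])      # A, B, C
```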

What Efficiency Cannot Measure

The efficiency framing has a critical structural flaw: it measures what can be measured and ignores what cannot. Processing time, cost per decision, false positive rates — these are quantifiable. The experience of being seen, understood, and treated as a full human being by consequential institutions is not. So it does not appear in the model's objective function. And what doesn't appear in the objective function doesn't get optimized.
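Stated as code, the flaw is almost embarrassingly literal. The objective below is hypothetical, with invented names and invented weights; the point is only what is absent from it.

```python
# A hypothetical deployment objective (names and weights invented).
# Only the measurable appears; nothing else can influence the choice.
def objective(metrics):
    return (2.0 * metrics["throughput"]
            - 1.0 * metrics["cost_per_decision"]
            - 3.0 * metrics["false_positive_rate"])

fast_and_opaque = {
    "throughput": 0.9, "cost_per_decision": 0.1, "false_positive_rate": 0.05,
}
slower_but_humane = {
    "throughput": 0.6, "cost_per_decision": 0.3, "false_positive_rate": 0.05,
    # Offers explanations and appeals -- but there is no key for that,
    # so the objective cannot see it, let alone optimize for it.
}

print(objective(fast_and_opaque))    # 1.55 -- wins
print(objective(slower_but_humane))  # 0.75 -- loses
```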

This is not an accident. It is a choice. When a company deploys an automated hiring system, it is choosing to optimize for throughput over the candidate's experience of the process. When a bank replaces a loan officer with an algorithm, it is choosing to optimize for consistency over the applicant's ability to make their case. When a hospital deploys a triage algorithm, it is choosing to optimize for cost over the patient's sense of being genuinely heard.

These are legitimate choices in some contexts. They are also choices that fall hardest on the people who were already most poorly served by the institutions they replace — who had the most to gain from a human being willing to look a little harder, listen a little longer, see past the number on the page.

The other problem with efficiency is that it assumes the current output is the right output, only faster. But institutions like hiring, lending, and healthcare are not just allocation mechanisms. They are social technologies — relationships between individuals and the structures of opportunity and care. Their value is partly in the doing, not just the result. A loan denial delivered with explanation and dignity is different from the same denial delivered as a server response. The difference is not inefficiency. It is humanity, and we are in the process of automating it away.

What Should Change

None of this argues for the status quo ante. Human loan officers discriminated. Human HR managers hired people who looked like them. Human doctors held — and still hold — dangerous biases about pain tolerance, drug-seeking behavior, and whose complaints deserve attention. The case against AI in these domains is not a case for uncritical faith in human judgment.

It is a case for something harder: for demanding that the tools we deploy be audited for disparate impact, for preserving meaningful avenues of explanation and appeal, for insisting that the efficiency gains flow to the people bearing the risk — not only to the institutions deploying the systems. It is a case for refusing the framing that because something is faster, it is better; because it is consistent, it is fair; because it can be measured, it can be trusted.

The hidden cost of efficiency is not hidden in the data. It is hidden in the people the data does not see.