Can We Ride the GenAI Wave Without Getting Subsumed by It?

By DAVID SHAYWITZ

“There are decades where nothing happens; and there are weeks where decades happen,” said Lenin, probably never. It’s also a remarkably apt characterization of the last year in generative AI (genAI) — the last week in particular — which has seen the AI landscape shift so dramatically that even skeptics are now updating their priors in a more bullish direction.
In September 2025, Anthropic, the AI company behind Claude, released what it described as its most capable model yet, and said it could stay on complex coding tasks for about 30 hours continuously. Reported examples included building a web app from scratch, with some runs described as generating roughly 11,000 lines of code. In January 2026, two Wall Street Journal reporters who said they had no programming background used Claude Code to build and publish a Journal project, describing the capability as “a breakout moment for Anthropic’s coding tool” and for “vibe coding” — the idea of creating software simply by describing it.
Around the same time, OpenClaw went viral as an open-source assistant that runs locally and works through everyday apps like WhatsApp, Telegram, and Slack to execute multi-step tasks. The deeper shift, though, is architectural: the ecosystem is converging on open standards for AI integration. One such standard, the Model Context Protocol (MCP) — often described as the “USB-C of AI” — is now being downloaded nearly 100 million times a month, suggesting that AI integration has moved from exploratory to operational.
Markets are watching the evolution of AI agents into potentially useful economic actors and reacting accordingly. When Anthropic announced plans to move into high-revenue verticals — including financial services, law, and life sciences — the Journal headline read: “Threat of New AI Tools Wipes $300B Off Software and Data Stocks.”
Economist Tyler Cowen observed that this moment will “go down as some kind of turning point.” Derek Thompson, long concerned about an AI bubble, said his worries “declined significantly” in recent weeks. Heeding Wharton’s Ethan Mollick — “remember, today’s AI is the worst AI you will ever use” — investors and entrepreneurs are busily searching for opportunities to ride this wave.
Some founders are taking their ambition to healthcare and the life sciences, where they see a slew of problems for which (they anticipate) genAI might be the solution, or at least part of it. The approach one AI-driven startup is taking toward primary care offers a glimpse into what such a future might hold (or perhaps what fresh hell awaits us).
Two Visions of Primary Care
There is genuine crisis in primary care. Absurdly overburdened and comically underpaid, primary care physicians have fled the profession in droves — some to concierge practices where (they say) they can provide the quality of care that originally attracted them to medicine, many out of clinical practice entirely. Recruiting new trainees grows harder each year.
What’s being lost is captured with extraordinary power by Dr. Lisa Rosenbaum in her NEJM podcast series on the topic.
In a companion essay, Rosenbaum documents the measurable consequences when patients lose a primary care physician: a rise in mortality, emergency room visits, and hospitalizations, all in proportion to the relationship’s duration — suggesting, as she writes, “that the relationship itself conferred health benefits.” Worse, more than three quarters of patients never form a new PCP relationship after losing one.
But Rosenbaum’s deepest concern isn’t statistical. It’s about what she calls the “good doctor” phenotype — not a skill set but a style. She describes a physician whose hallmark was assuming responsibility for the totality of his patients’ problems. When Rosenbaum was caring for one of his hospitalized patients, the patient insisted she update the doctor, explaining simply: “He will want to know.” For Rosenbaum, having your patients intuit that you would want to know — far more than any quality metric — constitutes the essence of being a good doctor. A “culture without a vision of the good doctor,” she warns, “is a profession without a soul.”
Her darkest worry: the system may morph into “some artificial-intelligence-enhanced triage system devoid of a relational core.”
Which is almost exactly what physician-entrepreneur Muthu Alagappan, co-founder of Counsel Health, aspires to deliver — for the sake of patients. His starting point: 100 million Americans don’t have a relationship with a doctor, good or otherwise. The relational ideal Rosenbaum celebrates is already inaccessible to vast swaths of the population.
At Counsel Health — recently backed by a $25M Series A from GV and Andreessen Horowitz — AI handles the upfront information gathering and initial clinical reasoning, functioning, as Alagappan puts it, like “an extremely smart medical resident that is reasoning along with them, serving up the plan and allowing them to approve or deny in a single click.” Doctors see 15 to 20-plus patients per hour. The vision: primary care visits costing less than a dollar.
As Alagappan sees it, “It’s hard to fathom a cognitive aspect of the practice of medicine in primary care that a technology system is just not better suited to do than the human brain.”
He acknowledges that humans may still be necessary for pesky, hands-on tasks like wrapping an ankle or administering a vaccine, but beyond these, he seems to believe, the future belongs to the machines. He anticipates “regulation will ease and improve so that the AI can do more and more.”
In Utah, the approach pursued by a startup called Doctronic suggests such regulatory change may be closer than we think. The company’s AI renews prescriptions for 190 routine medications without a physician in the loop, at $4 per script — with a malpractice insurance policy covering the AI system itself, along with escalation and oversight safeguards. Expansion to states like Texas, Arizona, and Missouri is already contemplated, with a national rollout under consideration as well.
Who’s in Charge?
As AI capabilities compound rapidly, there is tremendous temptation to apply them wherever they fit most naturally. Without intentionality, this approach risks quietly redefining disciplines by the tasks the technology performs well. Because AI can efficiently process symptoms, match protocols, and renew prescriptions, we might start to define medicine as these specific tasks — in much the same way that because we can measure steps, sleep scores, and VO2 max, we’re tempted to define health as the optimization of dashboard metrics. As Kate Crawford astutely warned, we must not let the “affordances of the tools become the horizon of truth.”
This tension extends to biopharma R&D as well. Here, efforts to leverage AI have succeeded in limited domains with dense data and established benchmarks, but have struggled where the critical data are scarce, highly conditional, or both — as Andreas Bender, in particular, has eloquently discussed.
We’re always tempted to look where the light is. But difficult as it can be to maintain focus on what actually matters, rather than what technology most readily delivers, it can be done.
A Company Built on What Matters
For some time now, I’ve argued — in this space, at KindWellHealth, and elsewhere — that genuinely enhancing human flourishing requires attention to three broad dimensions: physiology (movement, nutrition, recovery, preventive screening), agency (your belief in your ability to shape a better future), and connection (the value of meaningful relationships and purposeful pursuits).
The news that caught my attention recently was that someone independently built a business around exactly this framework. Unbound, a UK-based preventive health company operating from a single, newly opened location in London’s Shoreditch, describes itself as “built on the belief that physical, mental and social health are inseparable.”
Several design choices distinguish Unbound from the optimization-culture norm. They measure connectedness alongside biomarkers — literally assessing social connection as a clinical input. Their medical director, Dr. Elliott Roy-Highley, frames health as “not merely the result of internal cellular mechanics, but an emergent property of social integration, purpose, and communal regulation.” A coffee shop replaces the waiting room; community circles, run clubs, and art exhibitions aren’t wellness window-dressing but structural commitments: the social environment is treated as a meaningful part of the intervention.
Perhaps most distinctive is a post-assessment “future self” exercise — an evidence-backed positive psychology intervention that asks participants to envision their optimal future self and identify personal barriers to achieving that vision. By strengthening the psychological connection between present and future selves, the exercise enhances goal clarity, self-efficacy, and motivation for behavior change. This process works through narrative mechanisms — imagining, evaluating, and orienting toward personally meaningful goals — that translate assessment insights into actionable health strategies.
Crucially, Unbound doesn’t reject measurement and technology. They offer a companion app for extending connection and tracking recommendations beyond the clinic; their assessments integrate blood work and physical performance testing alongside the emotional and social components. As Unbound puts it: “Yes, we use tools like clinical testing — but not as a way to measure your worth or push you to chase perfection. We use them to guide and support a much bigger goal: helping you live the life you want, with clarity and confidence.” The intent: leverage science and technology with intentionality, pointing them where they should be aimed, rather than where they are most inclined to go.
Of course, there’s a large gap between a compelling concept and improved health. It’s possible Unbound will prove to be savvy wellness marketing aimed at motivated, affluent urbanites. The people who walk into a trendy Shoreditch health studio are already relatively motivated and likely already drawn to purposeful engagement. And while the approach is theoretically grounded, evidence that the program actually improves health has yet to emerge.
But the interest Unbound has attracted reveals a substantial appetite for something beyond relentless metric optimization — and there’s little in their approach that seems especially proprietary. The same foundational principles — deepen connection, develop agency, attend (with compassion) to physiology — all could be applied at scale by incumbents and digital platforms. Peloton, for instance, has the community infrastructure and the user engagement; what it lacks is a framework that extends beyond leaderboards and performance dashboards toward something that might help users not just perform but flourish.
Bottom Line
GenAI is advancing at a pace that would have seemed fantastical even a year ago; the developments of the last few weeks have forced even seasoned skeptics to recalibrate. There is tremendous incentive — and good reason — to ride this technology wave toward compelling opportunities like the crisis in primary care. But as these capabilities compound, the central challenge will be ensuring the technology serves what patients and people actually need, rather than allowing those needs to be defined by what the technology most readily delivers. The risk of essentially reducing health to what can be optimized by technology is real, as so many tech-powered companies in healthcare, biotech, and fitness demonstrate. But it is also possible to leverage technology in service of a more complete and less reductive vision — attending to physiology, agency, and genuine human connection — as Unbound suggests, and as, hopefully, many others will pursue.
Dr. David Shaywitz, a physician-scientist, is a lecturer at Harvard Medical School, an adjunct fellow at the American Enterprise Institute, and founder of KindWellHealth, an initiative focused on advancing health through the science of agency. This piece was previously published on the Timmerman Report.