Saturday, April 11, 2026

Clear Press

Trusted · Independent · Ad-Free

AI Clones of Health Influencers Will Charge You for Advice Around the Clock

New platform Onix promises 24/7 access to digital twins of wellness experts, raising questions about medical guidance and commercial conflicts.

By Dr. Kevin Matsuda · 5 min read

A new platform is betting that people will pay to chat with artificial intelligence versions of their favorite health and wellness influencers — at any hour of the day or night.

Onix, which launched this week, bills itself as a "Substack of bots," allowing creators in health, fitness, nutrition, and wellness to build AI replicas of themselves that can interact with subscribers continuously. According to reporting by WIRED, the startup aims to help influencers monetize their expertise beyond traditional content creation while providing followers with personalized guidance on demand.

The concept raises immediate questions about the nature of medical and health advice in an era of generative AI. Unlike a blog post or video that users consume passively, these AI twins engage in back-and-forth conversations that could feel remarkably similar to actual consultations with healthcare professionals.

How the Platform Works

Creators on Onix train their digital twins using their existing content, expertise, and communication style. Subscribers then pay for access to these AI versions, which can answer questions, provide guidance, and maintain ongoing conversations about health topics ranging from nutrition plans to exercise routines.

The platform's creators see this as democratizing access to expert knowledge. Rather than waiting for a creator's next Instagram post or YouTube video, followers can get immediate responses tailored to their specific situations. For influencers, it represents a new revenue stream that doesn't require their constant personal attention.

But the model also introduces a commercial layer that complicates the advice-giving dynamic. As WIRED noted, these AI twins could potentially promote the creator's products during conversations — turning what feels like personalized health guidance into a sophisticated marketing channel.

Medical and Ethical Concerns

The medical community has grown increasingly wary of health advice delivered through social media influencers, even when those influencers are actual humans. Automating that advice through AI systems introduces additional layers of risk.

AI language models, despite their impressive conversational abilities, can generate plausible-sounding but incorrect information — a phenomenon researchers call "hallucination." In health contexts, such errors could range from merely unhelpful to genuinely dangerous. An AI trained on a nutritionist's content might confidently recommend an eating plan that fails to account for a user's specific medical conditions or medications.

There's also the question of accountability. When a human expert gives advice, there are professional standards, licensing requirements, and legal frameworks that govern their conduct. When an AI replica of that expert dispenses guidance, who bears responsibility if something goes wrong?

The platform appears to focus on wellness and lifestyle advice rather than direct medical care, but that line can blur quickly. A conversation about nutrition can easily venture into managing diabetes. Fitness guidance might touch on cardiac health. The AI doesn't know when it's crossed from general wellness into medical territory.

The Broader Trend

Onix represents the latest example of AI being deployed to scale human expertise. Similar concepts have emerged across industries — AI teaching assistants trained on professors' materials, customer service bots that mimic company founders, even AI "therapists" built on established therapeutic frameworks.

The health and wellness space has proven particularly attractive for such applications because demand consistently outstrips supply. People want personalized guidance, but human experts have limited time. AI promises to fill that gap.

Yet healthcare has traditionally moved cautiously with automation for good reason. The stakes of getting things wrong are high, and the complexity of individual human health often defies simple pattern matching.

Questions of Authenticity

There's also something philosophically unsettling about the concept. When someone seeks advice from a trusted expert, they're not just accessing that person's knowledge base — they're benefiting from their judgment, their ability to recognize nuance, their accumulated wisdom from seeing thousands of cases.

An AI trained on someone's content can mimic their communication style and recall their stated positions, but it doesn't possess their actual judgment. It's a sophisticated prediction engine, generating responses based on statistical patterns in training data, not a thinking entity that can truly assess a unique situation.

For users, the experience might feel authentic enough to be useful, or it might fall into an uncanny valley where the AI is convincingly human-like in some ways but frustratingly limited in others. The real test will be whether people find value in these interactions or whether the limitations become apparent quickly.

Commercial Considerations

The business model raises its own set of questions. If an AI version of a supplement company founder is recommending products during health consultations, is that disclosed clearly enough? Can users distinguish between advice based on their best interests and advice influenced by commercial considerations?

Traditional advertising and sponsored content have evolved disclosure requirements precisely because people deserve to know when they're being marketed to. But in a conversational AI context, where product recommendations might emerge naturally within a longer discussion, those lines could easily blur.

The platform will need to navigate these issues carefully to avoid regulatory scrutiny or user backlash. Health-related claims are already subject to significant oversight from bodies like the Federal Trade Commission, and automating such claims through AI doesn't exempt them from those rules.

What This Means in Practice

For now, Onix appears to be targeting the wellness influencer market rather than licensed healthcare providers. That may help it avoid some regulatory hurdles, but it doesn't eliminate the fundamental concerns about the quality and safety of the advice on offer.

The platform's success will likely depend on how well it manages user expectations and how effectively creators can train their AI twins to provide genuinely useful guidance without overstepping into areas requiring professional medical judgment.

As AI continues to permeate every aspect of digital life, experiments like Onix are inevitable. The question isn't whether AI will be used to scale expert knowledge, but how to do so responsibly — with appropriate guardrails, clear disclosures, and realistic understanding of both the technology's capabilities and its limitations.

For users tempted by 24/7 access to their favorite health influencers, the advice remains the same as it's always been: consider the source, maintain healthy skepticism, and consult actual licensed professionals for serious health concerns. An AI, no matter how well-trained, is still just a sophisticated text generator — not a substitute for genuine medical care.

