AI overviews in search: what is the impact on health content?
- Stephanie Mackay Stokes
- May 9
- 4 min read

AI summaries are changing how patients find health advice online – but are they getting it right? Most of us turn to Google when a health question strikes. Whether it’s a late-night symptom search or a quick check before a GP appointment, we lean on search engines for a sense of clarity.
Since Spring 2024, search engines (not just Google, but also Bing, Yahoo and DuckDuckGo) have rolled out AI-generated overviews for health topics, designed to give you an immediate answer right at the top of the page.
The traditional role of SEO is to generate clicks to your website, but these AI overviews (AIOs) are resulting in ‘zero-click’ searches by answering people’s questions without them needing to click on a search result.
For many of our clients – whether in pharma, private healthcare or digital health – AI overviews raise important questions: will patients still find our carefully crafted content? How can we ensure regulatory messages and risk information are not lost in oversimplified AI summaries? And how do we maintain trust when we can’t control the first answer people see?
Why AI health summaries can be risky
AI models are excellent at recognising patterns, summarising large volumes of data and mimicking tone – but they select content based on patterns, not always clinical relevance. AIOs may provide quick, convenient answers to search queries, but with ‘Your Money or Your Life’ (YMYL) content like health, we need to be sure that nuanced, patient-safe guidance is surfaced and prioritised.
Sometimes, AIOs result in well-meaning but risky advice, like suggesting mint leaves for appendicitis (1). Other times, it’s what gets left out that’s most concerning – a search on jaw pain, for example, returned dental causes but did not mention that it can also be a sign of a heart attack (2).
Imagine a patient searches for a medication name. The AI summary provides a general use case but omits essential safety advice or black box warnings. The patient may be misinformed – and your compliant, patient-centred content may never be seen.
Health content isn’t just about pulling facts – it’s about context. How to explain risks clearly, and when not to give false reassurance, are judgment calls that require expertise, empathy and editorial rigour.
That’s where human-written, health literate content still plays an essential role.
Do people trust AI overviews for health queries?
We ran a poll on LinkedIn to ask people in our network what they think.
The verdict: Helpful, but not fully trusted
We asked how much trust people place in Google’s AI-generated health summaries:
- 64% said they trust it occasionally
- 36% said they trust it most of the time
- 0% said they always trust it
- 0% said they don’t trust it at all
While no one fully dismissed the summaries, no one was ready to rely on them entirely either. People appreciate the convenience, but they’re cautious – and for good reason.
Why your health content matters even more now
According to a study by Ahrefs, the arrival of AIOs in search results has caused a 34.5% drop in clicks for many keywords, reinforcing reports from site owners of a 20–40% drop.
But while AI might answer a quick question, when people are facing a serious concern or are ready to act on a YMYL decision such as healthcare, they still seek out a trusted, expert source. That’s where your content comes in.
Even in an AI-first search landscape, your content plays a critical role in building trust and converting interest into action. AI overviews are simply filtering out casual browsers, so that the people who do reach your site are better leads, with intent. They may be ready to book, refer, or resolve a problem. And for clients working in regulated healthcare, the shift to AI-generated summaries isn’t just an SEO concern – it’s a safety and brand integrity issue.
That’s why we’re working with many of our clients to create high-quality content that demonstrates experience, expertise, authoritativeness and trust (E-E-A-T) through a mix of techniques, including:
- Medical accuracy – through expert review, insights and research
- Regulatory knowledge – ensuring claims are compliant and ethical
- Patient experience – testimonials, reviews and stories that bring unique, human elements to content
- Editorial judgement – using health literacy techniques to clarify without oversimplifying
As health communicators, our role isn’t just to explain – it’s to make complex, emotionally loaded information feel accurate, trustworthy and empowering, prompting a decision and an action. AI might provide a starting point, but it can’t replace the critical thinking and strategic decision-making behind content that’s safe, usable, and genuinely patient-focused.
Why human-led health content still drives results
As AI continues to grow with improved accuracy and user experience, it’s likely that trust will grow too. For now, human-written content still leads the way. In a recent analysis, it outranked AI-generated content over a period of five months with 5.44x more traffic (3). That’s a clear sign that accuracy, nuance, and trust are still best delivered with a human touch.
At Wallace, we believe the future of health communication is hybrid: AI-supported, but human-led. It’s not about resisting innovation – it’s about using it wisely, with the patient still at the centre.
We’re working with clients to adapt their digital content strategy for this AI-first landscape. If you’d like to discuss how AIOs could affect your business, and to build trust with your audience through clear, accurate, and empathetic health content, get in touch – we’d love to help.