A couple of articles have recently led me to a familiar train of thought about how evidence-based medicine and anecdote can best be reconciled, and (a subsidiary worry) just how we are meant to know whether to believe what we read.
In 2014, the European Journal of Preventive Cardiology published a paper called “What proportion of symptomatic side effects in patients taking statins are genuinely caused by the drug?”. This led to a flurry of headlines announcing, variously, “Statins have no side effects” (the Telegraph) and – there were lots but let’s go for the Guardian: “Statin side-effects minimal, study finds.”
Fast forward to 2016, and a Telegraph headline runs “Painful side effects of statins are confirmed” (sorry Guardian readers, they didn’t cover the news). So that’s sorted that then. Only of course it hasn’t because for each article, the media story only tells a fraction of the tale. And debate will continue, in part because of the fascinating “nocebo” effect, whereby (to paraphrase) the dummy pill or placebo in a clinical trial causes side effects just like those of the real drug.
Back to the latest statin news: I was sufficiently intrigued by its apparently definitive nature that I looked at the original paper to see what it says about the 2014 study. Surely one headline-grabber will refer to the last, especially if they are in direct opposition? But the authors don't mention it. So research – and media reports – continue, ping pong, with patients left in the middle trying to figure out what to do, take, or stop taking. There have been numerous attempts over the years to help people through this quagmire, such as the NHS resource Behind the Headlines, which offers good general guidance on unpicking media reports, but hadn't covered this story.
Where statins are concerned, it has been suggested by wiser folk than me that – as the most widely prescribed drugs in the developed world – they should be the crowning glory of evidence-based medicine, with data – including that on side effects – collected routinely and shared widely. Iain Chalmers wrote a beautiful opinion piece last year about this.
Yet their use is still mired in uncertainty, and I can see why: even if we were really good at following all the rules of the research road for collecting evidence about the effects and side effects of drugs in use, we still need to be sensitive to the very real experiences of the tens or even hundreds of millions of folk who take a statin a day. Whatever population studies tell us, personal experience matters. Indeed, having been drawn into the Telegraph by the statin story, I was intrigued to see an article called “Do knee ops work? Ask the patients…” in which improvement brought about by scraping out the knee (arthroscopy) is said (by research) to be due to coincidence or the placebo effect. But as the article points out: “This illuminates a recurring problem in evaluating the findings of clinical trials that assess the outcome of such interventions in thousands of patients. To be sure, the aggregated results may show little overall benefit, but this can readily conceal the indisputable benefit for a minority.”
Almost every day that I work in patient and public involvement (PPI), something leads me to dwell on this question of how we can best mesh what science tells us is the truth with what the man, woman, or child I am talking to is certain is their truth. The statin and knee stories above highlight this, and our James Lind Alliance partnerships, in which patients, carers and clinicians are asked to suggest research priorities, repeatedly reinforce the fact that research doesn’t always address their needs, as they call for more studies into side effects or complementary or alternative therapies.
I can be as sceptical as the next woman when friends tell me with certainty that asparagus cured their ingrowing toenail, yet can I be sure they are wrong? There may be no trial to tell me so, or there may be one that’s about to be overturned by a different one, or any such studies may simply have been insufficiently sensitive to individual variation to detect the effect of that asparagus on that toe.
On the flip side, I have seen fear on the faces of researchers when, drawn reluctantly into working with patients as partners, they are confronted with views that may differ from each other as well as from their own. What’s a doc to do when the evidence-based mantra is running through their minds but the real world is looking very different?
These are just some of the perfectly imperfect conundrums about PPI that keep me awake; that at times make me want to throw in the towel. Until I remind myself just how important it is that we all get better at trying to see both parts of the equation. Side effects and all.