Month: September 2023

Identifying Potential Biases in Diagnostic Codes in Primary Care Electronic Health Records: What We Need to Know

Electronic health records (EHRs) are increasingly being used to collect and store data on patient care. These data can be used for a variety of purposes, such as improving clinical care, conducting research, and monitoring population health. However, it is important to be aware of potential biases in EHR data, as these can lead to inaccurate or misleading results.

The reliability of diagnostic codes in primary care EHRs is a subject of ongoing debate and a topic we investigated in a paper published in BMJ Open.

These codes not only guide clinical decisions but also shape healthcare policies, research, and even financial incentives in the healthcare system. A recent retrospective cohort study explored whether the frequency of these codes for long-term conditions (LTCs) is influenced by various factors such as financial incentives, general practices, patient sociodemographic data, and the calendar year of diagnosis. The study comes at a crucial time, shedding light on significant biases that need to be addressed.

Key Findings

The study, which involved data from 3,113,724 patients diagnosed with 7,723,365 incident LTCs from 2015 to 2022, revealed some significant findings:

Influence of Financial Incentives: Conditions included in the Quality and Outcomes Framework (QOF), a financial incentive programme, had higher rates of annual coding than those not included (1.03 vs 0.32 per year, p<0.0001).

Variability Across GPs: There was significant variation in the frequency of coding across different general practices, which was not explained solely by patient sociodemographic factors.

Impact of Sociodemographic Factors: Higher coding rates were observed in people living in areas of greater deprivation, irrespective of whether the conditions were part of the QOF.

Impact of COVID-19: The study noted a decrease in code frequency for conditions that had follow-up time in the year 2020, likely due to the COVID-19 pandemic affecting healthcare services.

Implications for Healthcare Providers and Researchers

The findings of the study raise some pertinent questions:

Addressing Financial Incentives: If the QOF influences coding rates, how can we ensure a level playing field for conditions not included in such programs? This could impact resource allocation and healthcare planning.

Standardizing Practices: The variability in coding across GPs implies that there might be inconsistencies in how conditions are diagnosed and recorded. These inconsistencies need to be addressed to improve the quality of healthcare.

Considering Sociodemographic Factors: The influence of patient sociodemographic factors suggests a need for tailored interventions, especially in areas with higher deprivation levels.

Navigating Pandemic-related Challenges: The reduction in coding during the COVID-19 pandemic indicates that external factors can significantly affect healthcare data. This demands adaptive strategies to ensure the ongoing reliability of EHRs.

Conclusions and Future Steps

As we move towards a more data-driven healthcare system, understanding the biases in primary care EHRs becomes crucial. The study suggests that natural language processing or other analytical methods using temporally ordered code sequences should account for these biases to provide a more accurate and comprehensive picture. By doing so, healthcare providers and policymakers can better tailor their strategies, ensuring more effective and equitable healthcare delivery.

Navigating the academic publishing process

I am sometimes asked by junior researchers or by the public how the academic publication process works. The academic peer review timeline varies depending on the journal, but it typically takes several months (sometimes even longer) from submission to publication.

1. Submission: You submit your paper to the journal. Make sure your paper is well-written, checked for spelling and grammatical errors, follows the journal’s style and formatting requirements, and that you submit your paper to a journal that is a good fit for your work.

2. Initial screening: An editor at the journal reviews your paper to make sure it is within the scope of the journal & meets the journal’s style and formatting requirements. Some articles are rejected at this stage without external peer review (particularly by larger journals). For example, articles may be rejected if they are outside the scope of the journal, if they are poorly written or have major methodological flaws, or if they do not include the relevant research checklist (such as STROBE or PRISMA). Other reasons for rejection include a lack of ethical approval or because the work duplicates something published elsewhere.

3. Peer review: The editor sends your paper to one or more external experts in your field for review. Reviewers are asked to assess the originality and significance of your work, the rigour of your research methods, and the validity of your findings. They may suggest revisions to your paper or recommend rejection.

4. Initial decision: The editor reviews the reviewers’ comments and decides whether to accept, reject, or revise your paper. Acceptance without any revisions is unusual as nearly all papers have scope to be improved. Generally, the authors have to respond to the comments from the referees and editor, and revise the paper before final acceptance.

5. Revisions: If your paper is accepted with revisions, you will usually be given a deadline to make the necessary changes. When sending back your revised paper, it is also normal practice to send a letter explaining how you have changed the paper in response to the comments.

6. Your response: Respond promptly to reviewer comments. Make sure your revisions are comprehensive and address all of the reviewers’ concerns and any comments from the editor. Be respectful and cooperative with the editor and reviewers. Finally, respond within the timescale given by the journal.

7. Final decision: Once your paper has been revised, it may be accepted without further changes; you may be asked to revise it again; or it may be rejected. Papers may be rejected if the authors do not adequately address the reviewers’ or editor’s concerns or if the revised paper still does not meet the journal’s standards. If accepted and no further changes are needed, the editor will send you a copy of the proofs for your final approval. This is your last chance to make changes.

8. Publication: Once you have approved the proofs, your paper will be published in the journal. Some journals (such as the BMJ) offer readers the opportunity to comment on a paper. It’s important to respond to these comments, which may sometimes highlight problems with your paper or suggest avenues for new research.

9. Responding to comments: When responding to comments, aim to be polite and respectful in your reply. Some comments can be constructive and others can be very critical of your paper. This post-publication review of a paper is an important part of the academic publication process. You can also engage with the broader public and research community through social media (for example, via Twitter or X). This increases the reach of your work, including the likelihood that it will be picked up by the media or policy-makers.

10. The total time it takes to go through this process can vary from a few months to a year or more. It is important to be patient and to follow the instructions of the editor and reviewers. By doing so, you can increase the chances of your paper being published in a suitable journal.

The academic publication process is an important way to ensure the quality and accuracy of the scientific literature. By following the steps outlined in this article, researchers can increase their chances of getting their work published in a reputable journal.

The Impact of Shielding and Loneliness on Physical Activity During the COVID-19 Pandemic

The COVID-19 pandemic had profound effects on many aspects of life, from healthcare to lifestyle habits. One of the most significant impacts has been on the mental and physical well-being of individuals, particularly older people. Our study published in PLoS One aimed to quantify the relationship between shielding status and loneliness at the start of the pandemic and how these factors affected physical activity (PA) levels throughout the period. Conducted in London, the study surveyed 7,748 cognitively healthy adults aged 50 and above from April 2020 to March 2021.


The study used the International Physical Activity Questionnaire (IPAQ) short-form to assess the physical activity levels of participants before the pandemic and six more times over the next 11 months. Linear mixed models were used to explore the relationship between shielding status and loneliness at the onset of the pandemic with physical activity over time.

Key Findings

Loneliness and Physical Activity

The study revealed that participants who felt ‘often lonely’ at the beginning of the pandemic completed significantly fewer Metabolic Equivalent of Task (MET) minutes per week during the pandemic. Specifically, they completed an average of 522 to 547 fewer MET minutes per week compared to those who felt ‘never lonely.’

Shielding and Physical Activity

Those who were advised to shield or self-isolate at the beginning of the pandemic also showed reduced levels of physical activity. They completed an average of 352 fewer MET minutes per week compared to those who were not shielding. After adjusting for demographic factors, the decrease was 228 fewer MET minutes per week.

Additional Factors

No significant associations were found between shielding, loneliness, and physical activity after further adjustments for health and lifestyle factors. This suggests that co-morbidities and health status also play an influential role.

Conclusions and Implications

The study indicates that those who were shielding or felt lonely at the start of the pandemic were likely to have lower levels of physical activity during the pandemic. Co-morbidities and health status also significantly influence these associations. Given the profound impact of physical activity on overall health, targeted interventions may be necessary to support these vulnerable populations in maintaining an active lifestyle, especially during challenging times like a pandemic.

For healthcare providers, public health professionals, and policy-makers, these findings underscore the need for comprehensive approaches that address not just the physical but also the psychological and social aspects of well-being, particularly for older adults. By understanding the interplay between these factors, we can aim for more effective public health strategies that promote a holistic approach to health and well-being, especially in times of crisis.

The Number Needed to Treat: Why is it Important in Clinical Medicine and Public Health?

You will often see the number needed to treat (NNT) mentioned in clinical guidelines, and when different health interventions are being prioritised or assessed for their clinical effectiveness and cost effectiveness. For example, the NNT was used to inform decisions to recommend statins for people with an elevated risk of cardiovascular disease.

The NNT is a measure used to quantify the effectiveness of an intervention or treatment. It is the average number of patients who need to be treated with a particular therapy for one additional patient to benefit.

How is NNT calculated?

In mathematical terms, the NNT = 1/[Absolute Risk Reduction]

Where Absolute Risk Reduction (ARR) = Control Event Rate (CER) – Experimental Event Rate (EER)

Control Event Rate (CER): The rate of an outcome in a control group.

Experimental Event Rate (EER): The rate of an outcome in an experimental group treated with the intervention.

For example, consider a drug that reduces the risk of heart attack from 4% to 2%. The ARR is 2% or 0.02 and the NNT is 50 (1/0.02). Hence, on average, 50 people will need to be treated to prevent one heart attack.
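The calculation above can be sketched in a few lines of Python (a minimal illustration; the function name `nnt` is my own, not from any library):

```python
def nnt(control_event_rate: float, experimental_event_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction (ARR)."""
    arr = control_event_rate - experimental_event_rate
    return 1 / arr

# The example above: a drug reduces heart attack risk from 4% to 2%
print(round(nnt(0.04, 0.02)))  # → 50
```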

Importance in Clinical Medicine

The NNT is important in clinical medicine because it helps in the evaluation of the efficacy of treatments by offering a direct, patient-centred measure. It is also helpful in clinical decision making as it allows doctors and patients to make evidence-based decisions on treatment options. For example, when presented with data on the NNT, patients can consider how useful a medical intervention is for them.

The NNT also helps in the assessment of the balance between potential benefits and harms of treatment; and provides a uniform metric for comparing the effectiveness of different treatments.

Role of NNT in Public Health

The NNT is also important in public health because it provides a metric that can be used at a population level, offering insights into public health strategies; for example, it can help policy makers determine the most efficient use of healthcare resources. When combined with other metrics, the NNT can be a tool in assessing the cost-effectiveness of public health interventions such as preventive measures, screening and vaccination.

For example, the NNT was used by the UK’s Joint Committee on Vaccination and Immunisation (JCVI) to decide which population groups should be prioritised for booster Covid-19 vaccinations by considering how many people in different age groups would need to be vaccinated to prevent one hospital admission.

Limitations of NNT

The NNT does have some limitations. For example, it does not account for side effects or adverse reactions to medical interventions. It is also specific to the particular patient populations and settings from which the data to calculate the NNT was derived. For example, many adverse health outcomes are more common in older people. Hence, the NNT is not uniform over the population and will be lower in groups at higher risk such as the elderly.
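The point about baseline risk can be made concrete with a short sketch: applying the same relative risk reduction (here, a hypothetical treatment that halves risk) to groups with different baseline event rates yields very different NNTs.

```python
def nnt(cer: float, eer: float) -> float:
    """Number needed to treat = 1 / (CER - EER)."""
    return 1 / (cer - eer)

# A hypothetical treatment that halves risk (50% relative risk reduction)
for cer in (0.20, 0.10, 0.02):   # high-, medium- and low-risk groups
    eer = cer / 2
    print(f"Baseline risk {cer:.0%}: NNT = {round(nnt(cer, eer))}")
```

The NNT rises from 10 in the high-risk group to 100 in the low-risk group, even though the relative benefit of the treatment is identical.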


Understanding NNT is crucial for both individual clinical decisions and broader public health strategies aimed at population health improvement. It provides an intuitive way to understand the practical impacts of treatment and public health interventions; and is a measure that is useful to many groups including policy makers, clinicians, public health specialists and patients.

Writing Your Student Essays and Dissertations: Some Tips on How to Do It Well

It’s that time of year when students are starting to enrol in higher education courses at universities and colleges. Every year, when marking essays and dissertations, I encounter numerous errors in students’ writing.

What are these errors, and how can you avoid them to make your dissertation more readable? Here are my top 10 tips for improving your academic writing:

1. Plan Your Outline

Most importantly, spend time planning the outline of your essay or dissertation. For dissertations, this means thinking about chapter headings and subsections for each chapter. Decide on the key tables, figures, and graphs you need to include to complement the main text. These visual elements should add value; they shouldn’t merely repeat what’s already said but should provide a different perspective or clearer illustration of your points.

2. Avoid Complexity

Many students assume that longer words are “more scientific” and thus preferable to shorter ones. For example, they might use “perspiration” instead of “sweat” or “haemorrhage” instead of “bleed.” Imagine if Winston Churchill had written his speeches in this “more scientific” manner.

3. Use Short Sentences

Shorter sentences are easier to read and help to ensure the examiner doesn’t miss the key points you’re trying to make. The same applies to paragraphs—don’t make them too long and look for natural breaks to start a new paragraph.

4. Choose Active Voice

Use active voice rather than passive voice in your text. For example, say, “I reviewed the literature,” rather than, “The literature was reviewed by me.” Active voice is easier to read, more direct, and makes it clear that you carried out the work.

5. Eliminate Superfluous Words

For instance, “based on” is better than “on the basis of,” and “even though” is preferable to “despite the fact that.” Eliminating unnecessary words gives you more room to present your work and helps you stay within the word count.

6. Use Clear Language

Use clear and professional language, and avoid clichés and colloquial expressions. These are seldom used in scientific writing and can be difficult for some examiners, especially non-native English speakers, to understand.

7. Master the Basics Early

When writing your dissertation, it’s not the time to be learning spelling, punctuation, and grammar. Most educational institutions offer writing assistance. Take these courses early in your program and invest in a good grammar and writing style guide.

8. Practice Scientific Writing

Many journals offer the opportunity to respond online to their articles. Use this opportunity to improve your critical thinking and argumentation skills. Working in a writing group can also be beneficial, as peer feedback can help you refine your work. There are also many guides available to help you improve your writing.

9. Study Good Examples

Read examples of excellent scientific writing to inspire your own work. For instance, consider reading “From Creation to Chaos: Classic Writings in Science” by Bernard Dixon.

10. Proofread Thoroughly

Before final submission of your work to the examiners, thoroughly check your spelling, punctuation, and grammar. You will be surprised how many errors can be easily caught with the spell and grammar check functions in word processing software.

Decoding Risk in Clinical & Public Health Practice: Absolute vs Relative Risk Reduction

What is the difference between Absolute Risk Reduction (ARR) and Relative Risk Reduction (RRR)? This is a common question from students and clinicians. Understanding these concepts is crucial for interpreting research findings, especially in clinical and public health settings.

Absolute Risk Reduction (ARR) refers to the difference in outcomes between a control group and a treated group in a clinical trial or a public health study.

Formula: ARR = CER – EER

Where: CER is the Control Event Rate (rate of event in the control group) and EER is the Experimental Event Rate (rate of event in the experimental group).

Example: Imagine a trial in which 10% of patients in the control group have an adverse event, and only 5% in the treatment group experience the same.

ARR = 10% – 5% = 5%

This means that the drug reduces the absolute risk of an adverse event by 5%. In total, 20 people need to be treated to prevent one event (the Number Needed to Treat, NNT).

Relative Risk Reduction (RRR) is the proportional reduction in outcomes between the treated and untreated groups. It’s a way to contextualize the effectiveness of a treatment by considering the baseline risk.

Formula: RRR = [(CER – EER) / CER] × 100

Example: Continuing with the same drug trial, RRR = [(10% – 5%) / 10%] × 100 = 50%

Interpretation: The drug reduces the relative risk of an adverse event by 50% compared to the control group.
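The two worked examples above can be combined into a short Python sketch (the function and variable names are my own) that derives ARR, RRR and the NNT from the same trial figures:

```python
def risk_measures(cer: float, eer: float) -> tuple[float, float, float]:
    """Return (ARR, RRR as a percentage, NNT) from control and experimental event rates."""
    arr = cer - eer          # absolute risk reduction
    rrr = arr / cer * 100    # relative risk reduction, %
    nnt = 1 / arr            # number needed to treat
    return arr, rrr, nnt

# The trial above: 10% of controls vs 5% of treated patients have the event
arr, rrr, nnt = risk_measures(0.10, 0.05)
print(f"ARR = {arr:.0%}, RRR = {rrr:.0f}%, NNT = {nnt:.0f}")  # ARR = 5%, RRR = 50%, NNT = 20
```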

Key Differences between ARR and RRR

  1. Context: ARR gives you the actual change in risk, which is straightforward and easily interpretable. RRR puts this change in the context of the baseline risk, making the treatment appear more effective than it may actually be.
  2. Impact: ARR is more useful for understanding the individual benefit of an intervention, while RRR is often more impressive for public health interventions where a small absolute change can have a large impact when scaled up.
  3. Communication: RRR is often used in marketing or in the media because it tends to produce a larger, more eye-catching number. However, this can be misleading if not presented alongside the ARR, which provides a more direct measure of an intervention’s effect.
  4. Clinical Relevance: Knowing both ARR and RRR can aid in shared decision-making between clinicians and patients. While RRR can show the effectiveness of a treatment, ARR can guide on how much benefit an individual patient can expect.

By understanding both Absolute Risk Reduction and Relative Risk Reduction, clinicians and public health specialists can better interpret the data from clinical, public health and epidemiological studies, and subsequently make more informed decisions about treatment options and public health interventions.