How to Read a Paper
Assessing Methodological Quality
When analyzing a paper for its methodological value, a number of questions may be considered:
1) Was the study original?
- is the study bigger, longer, or more substantial in other ways?
- is the population different from that of previous studies?
2) Who is the study about?
- how were subjects recruited?
- who was included?
- who was excluded?
- were subjects studied in 'real-life' circumstances?
3) Was the study design sensible?
- what intervention was being considered, and what was it being compared with?
- what outcome was measured, and how?
4) Was clinical bias avoided or minimized?
- groups should be as similar as possible, with the exception of the particular difference being measured
- selection bias, performance bias, exclusion bias, detection bias
5) Was assessment 'blind'?
6) Were preliminary statistical questions dealt with?
- sample size: a trial should be big enough to detect a worthwhile effect as statistically significant if it exists, and to give reasonable assurance that no such effect exists if none is found
- duration of follow-up: a study must be long enough to provide evidence of the intervention effect
- completeness of follow-up: subjects who withdraw are more likely to have experienced side-effects and missed check-ups, and less likely to have followed the trial protocol. Ignoring people who have withdrawn will bias results, usually in favour of the intervention. Standard practice is therefore to analyze results on an 'intention-to-treat' basis, keeping every subject in the group to which they were originally assigned
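The sample-size point above can be sketched with the standard normal-approximation formula for comparing two proportions. The event rates below (20% vs 30%) are hypothetical, and the z-values correspond to the common choices of a two-sided alpha of 0.05 and 80% power:

```python
import math

# Approximate sample size per arm for detecting a difference between two
# proportions (normal approximation). z_alpha ~ 1.96 for two-sided
# alpha = 0.05; z_beta ~ 0.84 for 80% power. Rates are hypothetical.
def n_per_arm(p1, p2, z_alpha=1.96, z_beta=0.84):
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g. to detect a drop in event rate from 30% to 20%:
print(n_per_arm(0.20, 0.30))  # roughly 290 subjects per arm
```

A trial much smaller than this would risk missing a real 10-percentage-point effect, which is the "big enough" requirement above.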
Study Validity
There are many things that can erode a study's validity.
Accuracy
One of the most significant is the study's accuracy: the degree to which its findings are free from error. Accuracy involves two components:
Precision/Reliability
- the degree to which the study is free from random (nonsystematic) error
- random error can result from random measurement error or sampling error
- reliability is the degree to which results can be replicated: inter-rater or intra-rater
- Cohen's kappa is a statistic designed to measure reliability as agreement beyond chance: below 0.40 (40%) is poor; above 0.75 (75%) is excellent
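As a concrete illustration, kappa can be computed by hand from a 2x2 agreement table for two raters; the counts below are hypothetical:

```python
# Cohen's kappa for two raters classifying the same 100 subjects.
#                 Rater B: yes   Rater B: no
# Rater A: yes        45             5
# Rater A: no         10            40
a, b, c, d = 45, 5, 10, 40          # cells of the agreement table
n = a + b + c + d

observed = (a + d) / n              # proportion of actual agreements
# Agreement expected by chance alone, from each rater's marginal totals
p_yes = ((a + b) / n) * ((a + c) / n)
p_no = ((c + d) / n) * ((b + d) / n)
expected = p_yes + p_no

kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # 0.7
```

Here the raters agree 85% of the time, but half of that is expected by chance, giving kappa = 0.70: good by the scale above, though short of excellent.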
Validity
- the degree to which systematic error is absent
- measures applicability of results to sample (internal validity) and to population (external validity, also called generalizability)
- internal validity: degree to which results are not due to bias or confounding
- external validity: degree to which results generalize beyond the study sample; affected by the subjects, setting, and subject matter
Bias
Bias is the systematic deviation from truth due to any trend in the collection, analysis, interpretation, publication, or review of data
- effect on internal validity is the more serious
selection bias
- occurs primarily in design phase
- threat to internal validity: participants chosen from different target groups
- threat to external validity: systematic differences in those who take part from the population as a whole
- self-selection can lead to poorer apparent results, e.g. if the sickest patients are the most willing to try new therapies
measurement bias
- how the instrument (e.g. a scope or a survey) might systematically over- or under-estimate what it is trying to measure
- recall bias: systematic differences in the accuracy or completeness of recall of past events or experiences
- interviewer bias: subconscious or conscious gathering of selective data
controlling bias
- the key is prevention in design and execution
- randomization, blinding, and standardization are the main tools
Confounding
Confounding is the confusion of the effects of variables, where an additional variable may be responsible for an apparent association or outcome. Confounding leads to systematic error and is, in effect, a form of bias.
- a confounder can either cause or prevent the outcome of interest, and is associated with, but not caused by, the risk factor/exposure
- confounders are distributed differently across study groups, leading to differential effects on the outcome
- stratifying by a suspected confounder and looking again for the relationship helps identify its presence and control for it
Methods for Controlling for Confounding
- Designing a randomized trial, restricting who can participate, or matching participants according to confounders
- Doing a stratified analysis to separate participants into subgroups, or including the confounder (or several) in logistic regression models
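A stratified analysis can be sketched with hypothetical counts in which the exposure looks harmful in the crude analysis, but stratifying by the confounder (here, age) shows no effect within either stratum:

```python
# Hypothetical cohort data: (cases, total) for each group.
# The exposed group is mostly old; the unexposed group is mostly young.
strata = {
    "young": {"exposed": (2, 100),  "unexposed": (10, 500)},
    "old":   {"exposed": (40, 200), "unexposed": (4, 20)},
}

def rate(cases, total):
    return cases / total

# Crude (unstratified) rates pool everyone together
crude_exp = rate(2 + 40, 100 + 200)    # 42/300 = 0.140
crude_unexp = rate(10 + 4, 500 + 20)   # 14/520 ~ 0.027

print(f"crude: exposed {crude_exp:.3f} vs unexposed {crude_unexp:.3f}")
for name, s in strata.items():
    e, u = rate(*s["exposed"]), rate(*s["unexposed"])
    print(f"{name}: exposed {e:.3f} vs unexposed {u:.3f}")
```

Within each age stratum the exposed and unexposed rates are identical (0.02 young, 0.20 old), so the large crude difference is entirely an artefact of how age is distributed across the groups.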
Standardization
A technique for removing, as much as possible, the effects of differences in confounding variables when comparing two or more populations.
Standardization is an adjustment of the crude rate of a health-related event to a rate comparable with a standard population.
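Direct standardization can be sketched as follows; the towns, rates, and standard population are hypothetical. Each town's age-specific rates are applied to a shared standard population, so differences in age structure drop out:

```python
# Standard population used for adjustment (arbitrary hypothetical split)
standard_pop = {"young": 600, "old": 400}

# (events, persons) by age group. Town A is older, Town B is younger,
# but their age-specific rates are identical (0.002 young, 0.030 old).
towns = {
    "A": {"young": (2, 1000), "old": (30, 1000)},
    "B": {"young": (8, 4000), "old": (15, 500)},
}

def crude_rate(town):
    events = sum(e for e, n in town.values())
    persons = sum(n for e, n in town.values())
    return events / persons

def standardized_rate(town):
    # Apply each age-specific rate to the standard population
    expected = sum((e / n) * standard_pop[g] for g, (e, n) in town.items())
    return expected / sum(standard_pop.values())

for name, town in towns.items():
    print(f"{name}: crude {crude_rate(town):.4f}, "
          f"standardized {standardized_rate(town):.4f}")
```

The crude rates differ markedly (0.016 vs about 0.005) purely because Town A has more old residents; the standardized rates are identical, correctly showing the two towns have the same underlying risk.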
Effect Modifiers
Effect modifiers are third variables that alter the direction or strength of the association between two other variables. They are worth knowing about and should be looked for
- differ from confounders in that the association varies across levels of the modifier: stratum-specific estimates differ from one another, rather than simply from the crude estimate
- can be analyzed using stratification or regression
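The stratification approach can be sketched with hypothetical counts in which the exposure's effect differs between strata, so no single pooled estimate would describe both groups:

```python
# (cases, total) per group; counts are hypothetical.
strata = {
    "men":   {"exposed": (30, 100), "unexposed": (10, 100)},
    "women": {"exposed": (10, 100), "unexposed": (10, 100)},
}

risk_ratios = {}
for name, s in strata.items():
    risk_exposed = s["exposed"][0] / s["exposed"][1]
    risk_unexposed = s["unexposed"][0] / s["unexposed"][1]
    risk_ratios[name] = risk_exposed / risk_unexposed
    print(f"{name}: risk ratio = {risk_ratios[name]:.1f}")
```

Here the exposure triples risk in men (risk ratio 3.0) but has no effect in women (risk ratio 1.0): sex modifies the effect, and the stratified results should be reported separately rather than averaged.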