Assessing how much the play of chance affects fair tests

Chance can lead us to make two kinds of mistakes when we interpret the results of fair treatment comparisons: we may conclude that there are differences in treatment outcomes when in fact there are none, or we may conclude that there are no differences when in fact there are. The more observations we make, the smaller the likelihood of being misled by chance.

Comparisons cannot be made for everyone who has received, or will receive, treatment for a given condition. It will therefore never be possible to determine the "true difference" between treatments. Instead, studies provide well-informed guesses about what the true differences are.

The reliability of estimated differences is often indicated by means of a "confidence interval" (CI). It is highly likely that the true difference lies within the stated confidence interval. Most people are already familiar with the idea of a confidence interval, even if the term itself is unfamiliar.

The 95% Confidence Interval (CI) for the difference between Party A and Party B narrows as the number of people polled increases.

For example, an opinion poll ahead of an election may report that Party A is 10 percentage points ahead of Party B, but the report will often note that the difference between the parties could be as small as 5 points or as large as 15 points. This "confidence interval" indicates that the true difference between the parties most likely lies somewhere between 5 and 15 percentage points.

The more people who take part in the poll, the less uncertainty there is about the result, and the narrower the confidence interval becomes.

Just as the degree of uncertainty around an estimated difference in the proportion of voters supporting two political parties can be quantified, so can the degree of uncertainty around an estimated difference in the proportion of patients who improve or deteriorate after two kinds of medical treatment. The same principle applies here: the more patients who are compared – say, the number who recover after a heart attack – the narrower the confidence interval around the estimate of the treatment difference. And the narrower the confidence interval, the better.
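To make the point about sample size concrete, here is a minimal sketch in Python (the recovery rates and group sizes are made up, and it uses the common normal approximation rather than any particular study's method) that computes an approximate 95% confidence interval for a difference between two proportions at several group sizes:

```python
from math import sqrt

def diff_ci_95(p1, n1, p2, n2):
    """Approximate 95% CI for the difference p1 - p2 (normal approximation)."""
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)  # standard error of the difference
    return diff - 1.96 * se, diff + 1.96 * se

# Hypothetical comparison: 60% recover on treatment A, 50% on treatment B.
for n in (100, 400, 1600):  # patients per treatment group
    lo, hi = diff_ci_95(0.60, n, 0.50, n)
    print(f"{n} patients per group: estimated difference 10 points, "
          f"95% CI {lo:+.1%} to {hi:+.1%}")
```

With 100 patients per group the interval stretches from below zero to well above it, so chance alone could explain the apparent 10-point difference; with 1,600 patients per group the same estimate is pinned down far more tightly.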

A confidence interval is usually accompanied by an indication of how confident we can be that the true value lies within the stated range. A 95% confidence interval, for example, means that we can be 95% certain that the true value of whatever is being estimated lies within the range of the confidence interval. This means there is a 5% (5 in 100) chance that the true value lies outside the interval.
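One way to get a feel for where the "95%" comes from is a small simulation (again a sketch with invented numbers): draw many repeated studies from a population where the true difference is known, compute a confidence interval from each, and count how often the interval contains that true difference. It should be roughly 95 times out of 100.

```python
import random
from math import sqrt

def diff_ci_95(x1, n1, x2, n2):
    """Approximate 95% CI for the difference between two observed proportions."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) - 1.96 * se, (p1 - p2) + 1.96 * se

random.seed(1)
TRUE_DIFF = 0.60 - 0.50        # the true difference we pretend to know
N = 400                        # patients per group in each simulated study
covered = 0
for _ in range(1000):          # 1000 imaginary repeats of the same study
    recovered_a = sum(random.random() < 0.60 for _ in range(N))
    recovered_b = sum(random.random() < 0.50 for _ in range(N))
    lo, hi = diff_ci_95(recovered_a, N, recovered_b, N)
    if lo <= TRUE_DIFF <= hi:
        covered += 1
print(f"{covered} of 1000 intervals contained the true difference")  # close to 950
```

This "how often would intervals calculated this way capture the true value" picture is also the interpretation discussed in the comments below.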

  • Steve George

    Overall this is a superb book and website. However, the stated meaning of ‘confidence interval’ is not correct. Maybe this is an intentional simplification because the book and website are intended for a broad audience. However, it makes one suspicious about other claims made by the authors if one of the important aspects is wrong. The correct meaning of a 95% confidence interval is that 95 out of 100 confidence intervals obtained in the same way (same population and same sample size) will include the true mean. To say that there’s a 95% chance that the true mean lies within the confidence interval would mean that there are many different true means, and 95 out of 100 of them fall within this particular confidence interval. Of course there is only one true mean, and it will lie within 95 out of 100 similarly-obtained confidence intervals.

    • Anonymous

      Many thanks for your kind words Steve, and I am sure that the team will want to make sure that everything is as accurate as it can be.

      You are right, in that the intention is to explain confidence intervals for an informed lay reader. I know from experience that this is not easy, and that sometimes an approximation is easier to understand.

      Stay tuned and I will see what they say.

    • Paul Glasziou

      Thanks for your complimentary remarks about the book. We might have used a different approach in our effort to explain confidence intervals, and we discussed this when writing the section. The deliberate simplification we used reflected our experience of trying to explain the precise frequentist interpretation of confidence intervals to lay audiences: this approach either seems to confuse them or goes over their heads. We could also have used Credible Interval and a uniform prior to match our more Bayesian explanation (http://en.wikipedia.org/wiki/Credible_interval), but that is not the term people are likely to come across.
      We are currently searching systematically for formal comparisons of which among alternative wordings used to explain research methods most helps lay people to get the right end of the stick. This is one of several issues that we would like to see addressed empirically to improve the evidence base needed to support better understanding of health research. Please let us know if you would like to be involved, and we would also encourage you and readers to become involved in http://www.nsuhr.net – An international Network to Support Understanding of Health Research.

  • Robert42

    Confidence intervals represent the uncertainty of an estimate attributable to sampling error. Small sample, bigger error, broader confidence interval. Big sample, smaller error, narrower confidence interval. If the sample encompasses all of the sample frame, the uncertainty falls to zero and the confidence intervals disappear.

    A 95% confidence interval means that if we were to repeat our test 100 times, the calculated confidence intervals would encompass the mean arrived at through full and complete coverage of the sample frame roughly 95 times. This mean is not a ‘true’ value. Confidence intervals only represent the uncertainty from sampling. There will be other errors in the measurement system that will forever keep us from knowing the ‘truth’. This is why Deming insists that there is no truth in measurement, and that the idea that there is, is so destructive to understanding statistical analysis.

    To conclude, say, that a difference between a treatment and control group is statistically significant at the 95% level only means that our experiment would come to similar estimates 95% of the time. It doesn’t mean the difference is real or true. (After all, we already knew the two groups were different.) All measures of statistical significance are comments on the measurement system used, not the reality being measured.