AutoMeta

Use of Confidence Intervals in Interpreting Nonstatistically Significant Results

author: Alexander T Hawkins, Lauren R Samuels
country: USA
Publication time: 2021-11
  • Title: Use of Confidence Intervals in Interpreting Nonstatistically Significant Results
  • Journal: JAMA
  • Year/Volume/Issue/Page: 2021/326/20/2068-2069
  • Language: English
  • Theme: Statistical analysis
  • Study type: Meta-analysis, systematic review

abstract:

The goal of much of medical research is to determine which of 2 or more therapeutic approaches is most effective in a given situation. The power of a study is the probability of detecting a true treatment effect of a given magnitude and is highly dependent on the number of patients studied. When a retrospective observational study design is used, researchers have little or no control over the sample size, and thus little control over the power to detect a particular treatment effect. When such a study yields nonstatistically significant results (referred to as nonsignificant results in this article), an important question is whether the lack of statistical significance was likely due to a true absence of difference between the approaches or due to insufficient power. To address this issue, some researchers may consider conducting a power calculation for the completed study. However, power calculations—even for randomized clinical trials—are irrelevant once a study has been completed. Careful use of confidence intervals (CIs), however, can aid in the interpretation of nonsignificant findings across all study designs.

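The abstract's point is that a wide confidence interval around a nonsignificant estimate can still include clinically important effects, so the study has not ruled them out. The sketch below is a minimal illustration of that reasoning (not code from the article): it computes a Wald 95% CI for a risk difference between two hypothetical treatment groups and checks whether the interval excludes a hypothetical "clinically meaningful" threshold. All counts and the 0.10 threshold are illustrative assumptions.

```python
# Minimal sketch: interpreting a nonsignificant result with a 95% CI
# for a risk difference, using a normal (Wald) approximation.
# Event counts and the clinically meaningful threshold are hypothetical.
import math

def risk_difference_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Return the risk difference (A - B) and its Wald 95% CI."""
    p_a, p_b = events_a / n_a, events_b / n_b
    diff = p_a - p_b
    se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical small study: 12/60 events with treatment A vs 9/60 with B.
diff, (lo, hi) = risk_difference_ci(12, 60, 9, 60)
print(f"risk difference = {diff:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")

# The CI contains 0, so the result is nonsignificant at the 5% level.
# But if the CI also contains differences large enough to matter clinically
# (here, 0.10), the study has not ruled out an important effect: the
# nonsignificant result may simply reflect limited sample size.
clinically_meaningful = 0.10
if lo < 0 < hi and hi >= clinically_meaningful:
    print("Nonsignificant, but a clinically important effect is not ruled out.")
elif lo < 0 < hi:
    print("Nonsignificant, and the CI excludes clinically important effects.")
```

With these illustrative numbers the interval is roughly (-0.09, 0.19): it includes zero (nonsignificant) but also includes differences above 0.10, which is the situation the article cautions against treating as evidence of "no difference."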