Q. What is the difference between a Type I and a Type II error?



The following chart is often provided to explain Type I and Type II errors. \(H_0\) represents the null hypothesis, and \(H_1\) the alternative hypothesis.

The chart is a two-by-two table. The columns represent reality (whether \(H_0\) or \(H_1\) is actually true), and the rows represent the test's conclusion (do not reject \(H_0\), or reject \(H_0\)). Each cell is either a type of error or a correct conclusion:

|                       | \(H_0\) is true    | \(H_1\) is true    |
|-----------------------|--------------------|--------------------|
| Do not reject \(H_0\) | Correct conclusion | Type II error      |
| Reject \(H_0\)        | Type I error       | Correct conclusion |

A Type I error occurs when \(H_0\) is actually true but we reject it. Conversely, a Type II error occurs when \(H_1\) is actually true but we fail to reject \(H_0\). The remaining two cells represent correct conclusions.

A Type I error can also be thought of as a false positive, and a Type II error as a false negative. The probability of a Type I error (the false positive rate) is denoted \(\alpha\), the significance level; the probability of a Type II error is denoted \(\beta\), and \(1 - \beta\) is the test's power.
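
To make this concrete, here is a minimal simulation (a sketch using NumPy; the sample size, number of simulated experiments, and the one-sample z-test are illustrative assumptions, not part of the original answer). When the null hypothesis is true, a test run at significance level \(\alpha = 0.05\) rejects, i.e. produces false positives, at roughly that rate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n = 10_000, 30  # number of simulated experiments, sample size (assumed)
alpha = 0.05
z_crit = 1.96           # two-sided critical value for alpha = 0.05

# H0 is true here: every sample is drawn from N(0, 1), so any rejection
# is by definition a false positive (Type I error).
false_positives = 0
for _ in range(n_sims):
    x = rng.standard_normal(n)
    z = x.mean() / (1 / np.sqrt(n))  # one-sample z-test with known sigma = 1
    if abs(z) > z_crit:
        false_positives += 1

print(f"Estimated Type I error rate: {false_positives / n_sims:.3f}")
```

The estimated rate should hover around 0.05, matching the chosen significance level.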

While neither type of error is desirable, for a fixed sample size you can’t decrease the likelihood of one type without increasing the likelihood of the other. When designing an experiment, it is therefore useful to consider which type of error is worse in context. For example, if you are developing a diagnostic test for a deadly disease, it would be better to falsely diagnose a healthy person with the disease (Type I error) than to fail to diagnose a sick person who will die from the disease (Type II error).
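
The trade-off can be seen in a small simulation (again a sketch; the effect size of 0.4, the sample size, and the one-sample z-test are illustrative assumptions). Making the test stricter by lowering \(\alpha\) raises the Type II error rate:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims, n = 10_000, 30
effect = 0.4            # H1 is true: data come from N(0.4, 1) (assumed effect size)
se = 1 / np.sqrt(n)

# One z statistic per simulated experiment.
z = (rng.standard_normal((n_sims, n)) + effect).mean(axis=1) / se

beta = {}  # estimated Type II error rate at each significance level
for alpha, z_crit in [(0.05, 1.96), (0.01, 2.576)]:
    # Failing to reject even though H1 is true is a Type II error.
    beta[alpha] = np.mean(np.abs(z) <= z_crit)
    print(f"alpha = {alpha}: estimated Type II error rate = {beta[alpha]:.3f}")
```

Lowering \(\alpha\) from 0.05 to 0.01 reduces false positives, but the printed Type II error rate rises, which is why the choice of \(\alpha\) should reflect which error is costlier.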


  • Last Updated Apr 23, 2021
  • Answered By Dorian Frampton
