Unit 11: Hypothesis Tests



 Teaching Tips

• The hypothesis test answers the question "Am I surprised by my sample?" The p-value answers the question "How surprising is my sample?" The confidence interval answers the question "What values of the population parameter would cause me not to be surprised by the sample?"

• Words are very important in this unit. Sloppy language and interpretation will cost students, on exams and throughout their lives.

• It is good to look for (a) clearly stated hypotheses, (b) conditions and/or assumptions stated and checked, (c) correct mechanics of calculation, and (d) a conclusion stated in context. It is important that the conclusion (part d) refer back to the mechanics (part c) so that the student makes perfectly clear how the decision was made.

• Sometimes you can just look at the data and see that the null hypothesis is clearly false. (For example, if a histogram shows that every observation is greater than the hypothesized value.) Students should not be taught to distrust graphical evidence, but rather that the hypothesis test is a way of formalizing what they see.

• Students should recognize when to use two-tailed versus one-tailed hypotheses.

• Two-tailed hypothesis tests and confidence intervals are connected: both are built on the same sampling distribution (via the Central Limit Theorem). If a two-tailed test at significance level α rejects the null hypothesis, then the hypothesized parameter value will fall outside the (1 − α) confidence interval constructed from the same sample, and vice versa. It is useful to show students examples of this.
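One way to show this duality concretely is with a quick computation. The sketch below uses hypothetical simulated data and scipy's one-sample t-test and t-interval; the agreement between "reject at 5%" and "100 is outside the 95% interval" holds by construction:

```python
import numpy as np
from scipy import stats

# Hypothetical example: test H0: mu = 100 against a two-sided alternative,
# then build a 95% confidence interval from the same sample.
rng = np.random.default_rng(0)
sample = rng.normal(loc=104, scale=10, size=40)  # simulated data; true mean differs from 100

t_stat, p_value = stats.ttest_1samp(sample, popmean=100)

# 95% t-interval for the population mean
mean = sample.mean()
se = sample.std(ddof=1) / np.sqrt(len(sample))
lo, hi = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=se)

# Duality: reject at alpha = 0.05 exactly when 100 falls outside the interval
rejected = p_value < 0.05
outside = not (lo <= 100 <= hi)
print(rejected, outside)  # these always agree
```

Re-running with different seeds (or a true mean of 100) is a nice classroom demonstration that the two conclusions never disagree.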

• One useful way to remind students that we always assume the null hypothesis is true during a hypothesis test is the “innocent until proven guilty” analogy.

• As with confidence intervals, don’t gloss over assumptions.

• The 1% and 5% significance levels are historical artifacts. (Fisher thought 5% was about right.) Let your students know that reporting a p-value often communicates more than simply stating "reject" or "fail to reject".

• Like with confidence intervals, it is important to have students look at examples of good and bad interpretations of p-values in context. A lot of practice will help them to understand which interpretations are correct and why.

• Context! Make sure students don’t just stop at “reject” or “fail to reject” or “the p-value means …”. They need to remember the big picture and explain how the results of the test answer the original question they were investigating. They should put their answers in context in such a way that they could explain their results to someone who doesn’t know statistics.

• As with confidence intervals, do a lot of practice looking at the effect of changing the various components of a test, to make sure students understand the intuition behind why things change the way they do. For example, you can explain the following:
  • Effect of sample size: the bigger the sample, the more accurate our estimate, because we have more information
  • Effect of confidence level: we’re 100% sure the true parameter lies somewhere, 95% sure it lies in a given interval, but only, say, 1% sure it lies in some teeny tiny interval
  • Effect of sample standard deviation: if the sample is all over the place, it’s hard to tell where the true population parameter is; if the sample is fairly concentrated, we have a better idea of where the true population parameter lies
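These effects can be demonstrated numerically through the margin of error of a t-interval. A minimal sketch, with all numbers hypothetical:

```python
import numpy as np
from scipy import stats

# Margin of error of a t-interval for a mean: t* × s / sqrt(n).
def margin_of_error(s, n, conf=0.95):
    t_crit = stats.t.ppf((1 + conf) / 2, df=n - 1)
    return t_crit * s / np.sqrt(n)

base = margin_of_error(s=10, n=25)

# Bigger sample -> narrower interval
print(margin_of_error(s=10, n=100) < base)
# Higher confidence level -> wider interval
print(margin_of_error(s=10, n=25, conf=0.99) > base)
# More spread in the sample -> wider interval
print(margin_of_error(s=20, n=25) > base)
```

Students can vary one input at a time and watch the interval respond, which makes the bulleted intuitions above tangible.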
• Power calculations are shown in some textbooks, and your understanding might be helped by struggling with a power calculation on your own. Students should be expected to understand what power is and how it relates to Type I and Type II error, sample size, significance level, and population standard deviation. Review Allan's batting average example and the flat tire example from the INSPIRE summer workshop.
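A textbook-style power calculation can be sketched as follows, for a hypothetical one-sided z-test with known population standard deviation (all numbers are made up):

```python
import numpy as np
from scipy import stats

# Hypothetical setup: test H0: mu = 50 vs Ha: mu > 50,
# with sigma known (a common textbook simplification).
mu0, mu_true, sigma, n, alpha = 50, 53, 10, 30, 0.05

se = sigma / np.sqrt(n)
# Reject H0 when the sample mean exceeds this cutoff
cutoff = mu0 + stats.norm.ppf(1 - alpha) * se
# Power = P(reject H0 | the true mean is mu_true)
power = 1 - stats.norm.cdf(cutoff, loc=mu_true, scale=se)
beta = 1 - power  # Type II error rate
print(power, beta)
```

Varying n, alpha, sigma, or mu_true in this sketch shows exactly how each component trades off against power, which is the understanding students are expected to have.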

• The null hypothesis can be written with an inequality, but this complicates the discussion of power and the interpretation of the significance level, since both are then computed at the boundary value of the parameter.

Student Misconceptions and Confusions

• Students find the null hypothesis difficult to identify in context. They need to practice writing null and alternative hypotheses.

• Neither the p-value nor the significance level is the probability that the null hypothesis is true. The null hypothesis is either true or false and probability statements don't apply. Remember that the p-value is a conditional probability: we assume the null hypothesis is true when computing it.
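The conditional nature of the p-value can be illustrated by simulation. A sketch with hypothetical numbers (a coin-fairness test after observing 60 heads in 100 flips): every simulated dataset is generated assuming the null hypothesis is true.

```python
import numpy as np

# Hypothetical setup: H0 says the coin is fair; we observed 60 heads in 100 flips.
rng = np.random.default_rng(1)
observed_heads = 60
n_flips, n_sims = 100, 100_000

# Sampling distribution of the head count *under H0* (p = 0.5)
sim_heads = rng.binomial(n=n_flips, p=0.5, size=n_sims)

# Two-sided p-value: how often does a fair coin look at least this extreme?
p_value = np.mean(np.abs(sim_heads - 50) >= abs(observed_heads - 50))
print(p_value)
```

The point for students: nothing in this computation uses the "true" probability of heads; the p-value is a statement about what happens in a world where H0 holds.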

• If the evidence is insufficient to reject the null hypothesis, don't write or say that you "accept" the null hypothesis. The best you can say is that you "fail to reject" the null hypothesis. The null hypothesis might still be false.

• Students have trouble putting the conclusion of a hypothesis test in context because they get caught up with vocabulary and methodologies. Remind them that they are just answering a simple question. Also remind them that they are determining whether or not they have enough evidence to reject their null hypothesis, so being able to state that hypothesis in context is important for their conclusions.

• A small sample size does not by itself invalidate a result, provided the conditions of the test are met.


• Batting average and flat tire examples from the INSPIRE summer workshop