# Common Mistakes in Using Statistics - Spotting Them and Avoiding Them

## May 20 - 23, 2013

### Course Notes

• Files are in PDF format.
• Most students will want to download the slides and either print them to take notes on in class or follow along in class on their laptop.
• The appendices contain additional material and references; they are available for later reference according to your own needs.
• Copies of course materials will not be handed out in class.
• Computers for individual use will not be available in the classroom for this particular course.
• If you need a different print size or would prefer a .doc file to take notes on, please email me so I can email you .doc files to adjust to your needs. (Past experience is that .doc files often acquire changes in formatting or symbols when posted on the web; this might also happen with email attachments on some platforms.)

| Day | Slides (2 per sheet) | Appendix |
|---|---|---|
| 1 (M May 20) | Slides Day 1 | Appendix Day 1 |
| 2 (Tu May 21) | Slides Day 2 Part 1 (1-29), Slides Day 2 Part 3 (31-58) (all three files for Day 2) | (No appendix for Day 2) |
| 3 (Wed May 22) | Slides Day 3 | Appendix Day 3 |
| 4 (Th May 23) | Slides Day 4 | Appendix Day 4 |

Empirical Probability Example

Wise Sampling Distribution Simulation

Rice Virtual Lab in Statistics Sampling Distribution Simulation

Bioconsulting Confidence Interval Simulation

R. Webster's Confidence Interval Simulation

W. H. Freeman's Confidence Interval Simulation

The Rice Virtual Lab in Statistics Confidence Interval Simulation

Rice Virtual Lab in Statistics Robustness Simulation

Claremont University's Wise Project's Statistical Power Applet

Jerry Dallal's Simulation of Multiple Testing
This simulates the results of 100 independent hypothesis tests, each at the 0.05 significance level. Click the "test/clear" button to see the results of one set of 100 tests (that is, for one sample of data). Click the button two more times (first to clear, then to run another simulation) to see the results of another set of 100 tests (i.e., for another sample of data). Notice as you continue that i) which tests give type I errors (i.e., are statistically significant at the 0.05 level) varies from sample to sample, and ii) which samples give type I errors for a given test varies from test to test. (To see the latter point, it may help to focus just on the first column.)
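The behavior of the applet can be sketched in a few lines of Python (the function and variable names here are illustrative, not part of the applet). When every null hypothesis is true, each test's p-value is uniform on (0, 1), so each of the 100 tests falsely rejects with probability 0.05 — but *which* tests reject changes from one simulated sample to the next:

```python
import random

ALPHA = 0.05    # significance level of each individual test
N_TESTS = 100   # number of independent hypothesis tests per sample

def one_sample(rng):
    """Simulate one set of N_TESTS tests when every null hypothesis is true.

    Under a true null, the p-value is uniform on (0, 1), so each test
    commits a type I error (p < ALPHA) with probability ALPHA.
    Returns a list of booleans: True where the test falsely rejects.
    """
    p_values = [rng.random() for _ in range(N_TESTS)]
    return [p < ALPHA for p in p_values]

if __name__ == "__main__":
    rng = random.Random()
    for sample in range(3):
        rejections = one_sample(rng)
        falsely_significant = [i for i, r in enumerate(rejections) if r]
        # The count hovers around 5, but the particular tests that
        # "succeed" differ on every run, just as in the applet.
        print(f"sample {sample}: {sum(rejections)} type I errors "
              f"at tests {falsely_significant}")
```

Running this repeatedly makes the applet's point concrete: on average about 5 of the 100 tests are "statistically significant" purely by chance, and the identity of those tests is unstable across samples.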

Jelly Beans (A Folly of Multiple Testing and Data Snooping)

Negative Consequences of Dichotomizing Continuous Predictor Variables

p-value video (amusement)