Common Mistakes in Using Statistics - Spotting Them and Avoiding Them

2014 Summer Statistics Institute Course, University of Texas at Austin

May 19 - 22, 2014 

Course Notes 

Please Note: Be sure to download all three files for Day 2.

    Day                  Slides (2 per sheet)               Appendix

    1 (Mon May 19)       Slides Day 1                       Appendix Day 1

    2 (Tue May 20)       Slides Day 2 Part 1 (pp. 1-31)     (No appendix for Day 2)
                         Slides Day 2 Part 2 (p. 32)
                         Slides Day 2 Part 3 (pp. 33-58)

    3 (Wed May 21)       Slides Day 3                       Appendix Day 3
                         Supplement to Slides Day 3

    4 (Thu May 22)       Slides Day 4                       Appendix Day 4

Additional Appendices

    Suggestions for Readers of Research        Suggestions for Researchers

    Suggestions for Teachers                            Suggestions for Reviewers, Editors, and IRB Members

External Links

Please note: Some of these links use Java applets, which your computer might block (depending on the version of Java you have and your security settings). For more information, see http://wise.cgu.edu/
 
Empirical Probability Example

Wise Sampling Distribution Simulation

Rice Virtual Lab in Statistics Sampling Distribution Simulation

Bioconsulting Confidence Interval Simulation

R. Webster's Confidence Interval Simulation

W. H. Freeman's Confidence Interval Simulation

The Rice Virtual Lab in Statistics Confidence Interval Simulation

Rice Virtual Lab in Statistics Robustness Simulation

Claremont Graduate University's WISE Project's Statistical Power Applet

Jerry Dallal's Simulation of Multiple Testing
This simulates the results of 100 independent hypothesis tests, each at the 0.05 significance level. Click the "test/clear" button to see the results of one set of 100 tests (that is, for one sample of data). Click the button two more times (first to clear and then to run another simulation) to see the results of another set of 100 tests (i.e., for another sample of data). Notice as you continue to do this that i) which tests give type I errors (i.e., are statistically significant at the 0.05 level) varies from sample to sample, and ii) for a given test, which samples give type I errors varies from test to test. (To see the latter point, it may help to focus just on the first column.)
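The same idea can be sketched in a few lines of Python. This is not Dallal's applet, just a minimal illustration of the principle it demonstrates: when every null hypothesis is true, each p-value is uniform on (0, 1), so each test is "significant" with probability 0.05, and which tests come out significant changes from sample to sample. The function name and seeds are my own choices for illustration.

```python
import random

def simulate_multiple_tests(n_tests=100, alpha=0.05, seed=0):
    """Simulate n_tests independent hypothesis tests with all nulls true.

    Under a true null, the p-value is uniform on (0, 1), so each test
    rejects (a type I error) with probability alpha. Returns the indices
    of the tests that came out 'significant' for this one sample of data.
    """
    rng = random.Random(seed)
    return [i for i in range(n_tests) if rng.random() < alpha]

# Two different samples of data give different sets of type I errors,
# even though every null hypothesis is true in both cases.
sample1 = simulate_multiple_tests(seed=1)
sample2 = simulate_multiple_tests(seed=2)
print("Sample 1 false positives:", sample1)  # on average about 5 of 100
print("Sample 2 false positives:", sample2)  # a different set, by chance
```

Rerunning with different seeds corresponds to clicking "test/clear" repeatedly in the applet: the expected number of false positives stays around 5, but their identities shift each time.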

Jelly Beans (A Folly of Multiple Testing and Data Snooping)

More Jerry Dallal Simulations: More Jelly Beans    Cellphones and Cancer    Coffee and ...

Spurious Correlations

Negative Consequences of Dichotomizing Continuous Predictor Variables

p-value video (For your amusement; made by UT grad students)

Website on Common Misteaks Mistakes in Using Statistics

    Content similar to the content of the course notes, but includes embedded links and more information.

Blog: Musings on Using and Misusing Statistics

A companion to the preceding website, Common Mistakes in Using Statistics. It contains updates to that site and occasional comments on other statistics-related matters that come to my attention. It may be of interest to the following categories of people:

    Teachers of statistics (especially those, such as myself, who come from backgrounds other than statistics)
    Undergraduates and early graduate students in statistics
    Users of statistics (especially people who read research using statistics)

Last updated May 13, 2014