Biology 300
Readings in Zar (1999) Biostatistical Analysis, 4th edition (and other resources)
- Introduction: Cats, statistics, and other stories.
- Data description: Kinds of data, frequency tables,
histograms, cumulative frequency distributions, mean, median,
mode, range, standard deviation, variance.
- Chapter 1 (all)
- Chapter 3 intro, sections 3.1, 3.2 (ignore interpolation for tied observations), 3.4, 3.5
- Chapter 4 intro, sections 4.1, 4.4, 4.5, 4.6
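The summary statistics listed above can all be computed with Python's standard library. A minimal sketch, using a small invented sample (not data from Zar):

```python
import statistics as st

# Hypothetical sample of n = 6 measurements (invented for illustration)
data = [2, 3, 3, 5, 7, 10]

mean = st.mean(data)                  # arithmetic mean
median = st.median(data)              # middle value (mean of the two middle values here)
mode = st.mode(data)                  # most frequent value
data_range = max(data) - min(data)    # range = max - min
var = st.variance(data)               # sample variance (divides by n - 1)
sd = st.stdev(data)                   # sample standard deviation

print(mean, median, mode, data_range, var, round(sd, 3))
```

Note that `statistics.variance` and `statistics.stdev` use the n - 1 (sample) denominator, matching the formulas in Chapter 4.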
- Introduction to estimation: Samples and populations,
population and sample histograms, distribution of sample means,
standard error of the sample mean.
- Chapter 2 (all)
- Chapter 6 section 6.3 (ignore the normal distribution for now)
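The distribution of sample means can be simulated directly. A sketch with an invented normal population (mu = 50, sigma = 10): the standard deviation of many sample means should approximate the standard error, sigma / sqrt(n):

```python
import random
import statistics as st

random.seed(42)

# Hypothetical normal population: mu = 50, sigma = 10 (invented values)
MU, SIGMA, N = 50, 10, 25

# Draw many samples of size N and record each sample mean
sample_means = [
    st.mean(random.gauss(MU, SIGMA) for _ in range(N))
    for _ in range(2000)
]

# The spread of the sample means estimates the standard error of the mean:
# sigma / sqrt(N) = 10 / 5 = 2
se_observed = st.stdev(sample_means)
print(round(se_observed, 2))
```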
- Introduction to hypothesis testing: Null vs. alternative hypotheses,
statistical significance, statistical errors.
- Chapter 6 section 6.4 (to end of page 83 only)
- Notes on the correct interpretation of the P-value from the Prism Guide
- Probability: Basic rules, mutually exclusive events,
independence, probability trees.
- Chapter 5 section 5.5, 5.6, 5.7
- Zar doesn't explain probability trees; see the trees in the
Term 1 online notes
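The tree calculations reduce to two rules: multiply probabilities along a branch, add across mutually exclusive branches. A sketch with an invented urn example (not from Zar):

```python
# Probability tree for drawing 2 marbles without replacement from an urn
# with 3 red and 2 blue marbles (hypothetical example).
# Multiply along a path; add across mutually exclusive paths.

p_red_then_red = (3/5) * (2/4)    # both draws red
p_blue_then_red = (2/5) * (3/4)   # blue first, then red

# Total probability that the SECOND draw is red (sum over both paths)
p_second_red = p_red_then_red + p_blue_then_red
print(round(p_red_then_red, 2), round(p_second_red, 2))
```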
- The Binomial distribution: Probabilities, estimating a binomial proportion, binomial test.
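Binomial probabilities, and the exact two-tailed binomial test built from them, can be computed with `math.comb`. A sketch with an invented coin-tossing example:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Hypothetical example: 16 heads in 20 tosses of a supposedly fair coin
n, p0, k = 20, 0.5, 16

# Two-tailed exact binomial test: sum the probabilities of all outcomes
# at least as improbable (under H0) as the observed one
p_obs = binom_pmf(k, n, p0)
p_two_tailed = sum(
    binom_pmf(i, n, p0) for i in range(n + 1)
    if binom_pmf(i, n, p0) <= p_obs + 1e-12
)
print(round(p_two_tailed, 4))
```

With p = 0.5 the two tails are symmetric, so this equals twice the upper-tail probability P(X >= 16).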
- Testing goodness of fit: Goodness of fit tests, the chi-square
distribution, goodness of fit to binomial distribution, log-likelihood ratio (G).
- Chapter 22 section 22.1, 22.2, 22.4, 22.5, 22.7
- Chapter 24 section 24.5
- Our rule of thumb will be: no expected frequency less than 1, and no more than 20% of expected frequencies less than 5
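A chi-square goodness-of-fit computation, with the rule of thumb above checked before the test. The data are an invented Mendelian-style cross, not an example from Zar:

```python
# Chi-square goodness of fit to a hypothesized 3:1 ratio
# (hypothetical cross; counts invented for illustration)
observed = [84, 36]                  # e.g. dominant : recessive phenotypes
n = sum(observed)                    # 120
expected = [n * 3 / 4, n * 1 / 4]    # [90.0, 30.0]

# Rule of thumb: no expected frequency below 1, and no more than
# 20% of expected frequencies below 5
assert min(expected) >= 1
assert sum(e < 5 for e in expected) / len(expected) <= 0.20

chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
# df = categories - 1 = 1; critical value chi^2_{0.05,1} = 3.841
print(round(chi_sq, 2))   # 1.6 here, so no evidence against the 3:1 ratio
```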
- The Poisson distribution: Probabilities, goodness of fit to Poisson distribution.
- Contingency tables: Tests of independence of two nominal variables,
2 × 2 tables, Fisher's exact test, chi-square test, general r × c tables, log-likelihood ratio.
- Chapter 23 intro, section 23.1, 23.2, 23.7
- Chapter 24 section 24.10 (don't worry about the computations; just know what the Fisher exact test is used for)
- The Normal distribution: Probabilities under the normal
curve, normal approximation to the binomial distribution.
- Chapter 6 intro, section 6.1, 6.2, 6.3 (again)
- Chapter 24 section 24.6 (normal approximation to binomial test only)
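The normal approximation to the binomial can be checked against the exact tail probability using `statistics.NormalDist` (Python 3.8+). The example numbers are invented:

```python
from math import comb, sqrt
from statistics import NormalDist

# Hypothetical example: P(X >= 60) for X ~ Binomial(100, 0.5)
n, p, k = 100, 0.5, 60

# Exact upper-tail probability
exact = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Normal approximation with continuity correction:
# X is approximately Normal(mu = n*p, sigma = sqrt(n*p*(1-p)))
mu, sigma = n * p, sqrt(n * p * (1 - p))
approx = 1 - NormalDist(mu, sigma).cdf(k - 0.5)

print(round(exact, 4), round(approx, 4))
```

With n this large and p = 0.5, the approximation agrees with the exact value to about three decimal places.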
- One-sample inference (normal case): Distribution of sample
means, confidence intervals for the population mean, one- and two-tailed
hypotheses, the t-distribution, distribution of sample variance,
testing departures from normality.
- Chapter 7 section 7.1, 7.2, 7.3, 7.4, 7.10, 7.11
- Chapter 6 section 6.5
- See the Prism Guide for a clear interpretation of the confidence interval for the mean.
- See the Rice Virtual Lab for a Java tool to simulate the confidence interval for a population mean.
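The coverage interpretation of a 95% confidence interval can also be simulated in a few lines of standard-library Python (similar in spirit to the Rice Virtual Lab tool). This sketch uses an invented normal population and the z multiplier 1.96 rather than the t multiplier, which is close enough at n = 50 for illustration:

```python
import random
import statistics as st

random.seed(7)

# Hypothetical normal population (mu = 100, sigma = 15); invented values
MU, SIGMA, N, TRIALS = 100, 15, 50, 1000

covered = 0
for _ in range(TRIALS):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    m = st.mean(sample)
    se = st.stdev(sample) / N ** 0.5
    # Approximate 95% CI using z = 1.96; the t-based interval from
    # Chapter 7 is slightly wider at this sample size
    lo, hi = m - 1.96 * se, m + 1.96 * se
    covered += (lo <= MU <= hi)

print(covered / TRIALS)   # close to 0.95
```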
- One-sample inference (non-normal case): Central Limit
Theorem, normal approximation.
- Zar does not have much on the Central Limit Theorem. The HyperStat Online Textbook
has a reasonably good explanation. The Rice Virtual Lab
has tools to simulate the Central Limit Theorem.
- Zar uses a different formula to approximate a 95% confidence interval for a
binomial proportion. You only need to know the formula presented in lecture (and
provided on the formula sheet).
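The Central Limit Theorem is easy to demonstrate by simulation: sample means from a strongly skewed population still pile up symmetrically around the population mean, with spread sigma / sqrt(n). A sketch using an exponential population (mean 1, sd 1; the choice of population is arbitrary):

```python
import random
import statistics as st

random.seed(3)

# Strongly skewed population: exponential with mean 1 and sd 1
N = 30   # sample size

# Distribution of sample means over many repeated samples of size N
means = [
    st.mean(random.expovariate(1.0) for _ in range(N))
    for _ in range(2000)
]

# CLT: the sample means are approximately normal with mean ~ 1 and
# standard deviation ~ sigma / sqrt(N) = 1 / sqrt(30) ~ 0.183
print(round(st.mean(means), 2), round(st.stdev(means), 3))
```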
- Two-sample inference: Confidence limits and tests for the
difference between two variances, and two means, normal approximation,
nonparametric alternatives (Mann-Whitney U-test).
- Chapter 8 intro, section 8.5, 8.6, 8.1, 8.2, 8.9, 8.10 (to top of page 153 only), 8.11
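The two test statistics for this unit, the variance ratio F and the pooled-variance two-sample t, computed by hand on invented data:

```python
import statistics as st

# Hypothetical measurements from two independent groups (invented data)
a = [12, 14, 15, 17, 18]
b = [10, 11, 13, 13, 14]
na, nb = len(a), len(b)

# Variance-ratio (F) test statistic: larger sample variance over smaller
F = max(st.variance(a), st.variance(b)) / min(st.variance(a), st.variance(b))

# Two-sample t statistic with pooled variance (equal-variance form)
sp2 = ((na - 1) * st.variance(a) + (nb - 1) * st.variance(b)) / (na + nb - 2)
se = (sp2 * (1 / na + 1 / nb)) ** 0.5
t = (st.mean(a) - st.mean(b)) / se

# df = na + nb - 2 = 8; critical t_{0.05(2),8} = 2.306,
# so t ~ 2.31 is just beyond the critical value for these invented data
print(round(F, 2), round(t, 3))
```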
- Paired samples: Paired t-test, confidence limits for
mean difference, nonparametric alternatives (Wilcoxon paired-sample
test).
- Chapter 9 intro, section 9.1, 9.2, 9.5
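The paired t-test reduces to a one-sample t-test on the within-subject differences. A sketch on invented before/after data:

```python
import statistics as st

# Hypothetical before/after measurements on the same 5 subjects
before = [20, 22, 19, 24, 21]
after = [22, 23, 22, 26, 23]

# Work with one difference per subject
d = [x - y for x, y in zip(after, before)]   # [2, 1, 3, 2, 2]
n = len(d)

# Paired t statistic: mean difference over its standard error
t = st.mean(d) / (st.stdev(d) / n ** 0.5)

# df = n - 1 = 4; critical t_{0.05(2),4} = 2.776
print(round(t, 3))
```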
- Notes on tests of significance: Significance vs. importance,
one- vs. two-tailed tests, testing many hypotheses.
- Experimental design: Treatments and controls, experimental
vs. observational studies, beware the sample of convenience.
- Introduction to multisample inference: The analysis of
variance (ANOVA) for comparison of several treatment means, one-way
ANOVA, random and fixed effects, homogeneity of variances, nonparametric
alternatives (Kruskal-Wallis test).
- Chapter 10 intro, section 10.1, 10.2, 10.4
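The one-way ANOVA partition of variability, worked by hand on three invented groups: the between-groups and within-groups sums of squares, their mean squares, and the F ratio:

```python
import statistics as st

# Three hypothetical treatment groups (invented data)
groups = [[1, 2, 3], [2, 3, 4], [4, 5, 6]]

all_obs = [x for g in groups for x in g]
grand_mean = st.mean(all_obs)
k = len(groups)        # number of groups
N = len(all_obs)       # total number of observations

# Partition the variability: between groups vs. within groups
ss_between = sum(len(g) * (st.mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - st.mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)   # df = 2
ms_within = ss_within / (N - k)     # df = 6
F = ms_between / ms_within

# Critical F_{0.05(1),2,6} = 5.14; F = 7 rejects equal means for these data
print(round(F, 2))
```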
- Multiple comparisons: The a posteriori comparison
of means, Tukey test, Newman-Keuls test.
- Chapter 11 intro, section 11.1, 11.2
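A sketch of the Tukey test following a significant one-way ANOVA, using the same style of invented data as above. The critical studentized-range value q_{0.05,3,6} = 4.339 is taken from a standard table; check it against Zar's Appendix before relying on it:

```python
import statistics as st

# A posteriori (post hoc) comparisons after a one-way ANOVA
# (three hypothetical groups of n = 3, invented data)
groups = [[1, 2, 3], [2, 3, 4], [4, 5, 6]]
k, n = len(groups), len(groups[0])
N = k * n

# Within-groups (error) mean square from the ANOVA
ms_within = sum((x - st.mean(g)) ** 2 for g in groups for x in g) / (N - k)

# Tukey honestly significant difference: q from a studentized-range table
q_crit = 4.339   # q_{0.05, k=3, df=6} (table value; verify against Zar)
hsd = q_crit * (ms_within / n) ** 0.5

means = sorted(st.mean(g) for g in groups)   # [2, 3, 5]
print(round(hsd, 2))
# Only the largest difference, 5 - 2 = 3, exceeds HSD ~ 2.51
```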
- Data transformations: Log, arcsine square root, and square
root.
- Chapter 13 intro, section 13.1, 13.2, 13.3
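The three transformations, applied to invented values with `math`. The small offsets (+1 for logs, +0.5 for square roots) follow the common conventions for handling zero counts; confirm the exact forms against Chapter 13:

```python
from math import log10, sqrt, asin, pi

# Common variance-stabilizing transformations (invented values)

counts = [1, 4, 9, 100]
log_counts = [log10(x + 1) for x in counts]      # log transform; +1 handles zeros
sqrt_counts = [sqrt(x + 0.5) for x in counts]    # square root, for Poisson-like counts

proportions = [0.0, 0.25, 0.5, 1.0]
# Arcsine square root transform for proportions (result in radians)
arcsine = [asin(sqrt(p)) for p in proportions]

print([round(v, 3) for v in arcsine])   # 0.5 maps to pi/4, 1.0 maps to pi/2
```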
- Linear regression: Bivariate data, scatterplots, dependent
and independent variables, estimation and tests for slope and
intercept, predicting values of Y, data transformation,
comparing two slopes.
- Chapter 17 intro, section 17.1, 17.2, 17.3, 17.4, 17.5 (first formula only; this
formula makes sense only for fixed effects), 17.10
- We will additionally cover the topic of inverse prediction in random effects models (not in Zar).
- The HyperStat Online Textbook has a section on the "regression effect" (regression toward the mean).
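The least-squares slope and intercept computed from the sums of squares and cross products, on invented bivariate data:

```python
import statistics as st

# Simple linear regression by least squares (invented bivariate data)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx, my = st.mean(x), st.mean(y)
sxx = sum((xi - mx) ** 2 for xi in x)                        # Sum x^2 (corrected)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))     # Sum of cross products

slope = sxy / sxx               # b = Sxy / Sxx
intercept = my - slope * mx     # a = Ybar - b * Xbar

# Predicted Y at a new X (staying within the observed range of X)
y_hat = intercept + slope * 3.5

print(slope, intercept, round(y_hat, 2))
```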
- Correlation: Linear correlation coefficient, confidence
intervals and tests, comparing correlation coefficients, nonparametric
alternatives (rank correlation); species are not independent observations.
- Chapter 19 intro, 19.1, 19.2, 19.9
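The Pearson correlation coefficient from the same sums of squares used in regression, on invented data:

```python
import statistics as st

# Pearson correlation coefficient (invented bivariate data)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

mx, my = st.mean(x), st.mean(y)
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

# r = Sxy / sqrt(Sxx * Syy)
r = sxy / (sxx * syy) ** 0.5
print(round(r, 3))

# The Spearman rank correlation is this same formula applied to the
# ranks of x and y (tied values get the average of the tied ranks)
```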
- Introduction to experiments with more than one factor
- Chapter 12 intro, section 12.1 (ignore formula details), figure 12.2