Abstract

As the preceding chapters illustrate, now that whole-genome scan analyses are becoming more common, there is considerable disagreement about the best way to balance false positives against false negatives (traditionally called type I and type II errors in statistical parlance). Type I and type II errors can be controlled simultaneously if we are willing to let the sample size of the analysis vary. This is the insight that Wald (1947) developed in the 1940s, which led to the theory of sequential sampling and inspired Newton Morton's development of the lod score method. We can exploit this idea further and capitalize on an old but nearly forgotten theory: sequential multiple decision procedures (SMDP; Bechhofer et al., 1968), which generalize the standard two-hypothesis test to consider multiple alternative hypotheses. Using this theory, we can develop a single, genome-wide test that simultaneously partitions all markers into "signal" and "noise" groups, with tight control over both type I and type II errors (Province, 2000). By conceiving of this approach as an analysis tool for a fixed-sample design (instead of a true sequential sampling scheme), we can let the data decide at which point we should move from the hypothesis-generation phase of a genome scan (where multiple comparisons make the interpretation of p values and significance levels difficult and controversial) to a true hypothesis-testing phase (where the problem of multiple comparisons has been all but eliminated, so that p values may be accepted at face value).
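As a point of reference only (a sketch, not drawn from the chapter itself), the following Python snippet illustrates Wald's classical sequential probability ratio test for a normal mean, the idea the abstract credits to Wald (1947): by fixing both error rates alpha and beta up front, the stopping boundaries are determined and the number of observations, rather than an error rate, becomes the random quantity. The function name, normal model, and parameter values are illustrative assumptions, not part of the SMDP procedure described in the chapter.

import math
import random

def sprt_normal_mean(observations, mu0=0.0, mu1=1.0, sigma=1.0,
                     alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mu = mu0 vs H1: mu = mu1 with known sigma.

    Returns the decision ('H0', 'H1', or 'undecided') and the number of
    observations actually used before a boundary was crossed.
    """
    # Wald's approximate stopping boundaries on the log-likelihood ratio,
    # set directly from the desired type I (alpha) and type II (beta) rates.
    upper = math.log((1.0 - beta) / alpha)   # crossing accepts H1
    lower = math.log(beta / (1.0 - alpha))   # crossing accepts H0

    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # Per-observation log-likelihood-ratio increment for a normal model.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(observations)

if __name__ == "__main__":
    random.seed(1)
    # Data simulated under H1: the test typically stops well before all
    # observations are used, while (approximately) holding both error rates.
    data = [random.gauss(1.0, 1.0) for _ in range(200)]
    print(sprt_normal_mean(data))

The SMDP approach described in the abstract generalizes this two-hypothesis stopping rule to a simultaneous partition of many markers, but the variable-sample-size mechanism for controlling both error rates is the same.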

Original language: English
Title of host publication: Advances in Genetics
Publisher: Academic Press Inc.
Pages: 499-514
Number of pages: 16
ISBN (Print): 0120176424, 9780120176427
DOIs
State: Published - 2001

Publication series

Name: Advances in Genetics
Volume: 42
ISSN (Print): 0065-2660
