The availability of high-throughput genotyping technologies and massive amounts of marker data for testing genetic associations is a double-edged sword. On one side is the possibility that the causative gene (or a closely linked one) will be found among those tested for association; on the other, testing many loci for association creates the potential for false positive results and the need to accommodate the multiple testing problem. Traditional solutions correct each test using an adjustment such as the Bonferroni correction. This approach worked well in settings involving a few tests (e.g., 10-20, as is typical for candidate gene studies) and even when the number of tests was somewhat larger (e.g., a few hundred, as in genome-wide microsatellite scans). However, current dense single nucleotide polymorphism (SNP) and whole-genome association (WGA) studies often consider several thousand to upwards of 500,000 or even 1 million SNPs. In these settings, a Bonferroni correction is not practical: it does not take into account correlations between the tests due to linkage disequilibrium and hence can be too conservative. The effect sizes of susceptibility alleles will rarely (if ever) reach the required level of significance in WGA studies if a Bonferroni correction is used, and the number of false negatives is likely to be large. Thus, one of the burning methodological issues in contemporary genetic epidemiology and statistical genetics is how to balance false positives and false negatives in large-scale association studies. This chapter reviews developments in this area from both historical and current perspectives.
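To make the scale of the problem concrete, the following minimal sketch (illustrative only; the specific alpha level of 0.05 and the test counts are assumptions, not values from this chapter) computes the per-test Bonferroni threshold alpha/m for study sizes ranging from a candidate gene panel to a whole-genome SNP panel:

```python
# Bonferroni correction: to hold the family-wise error rate (FWER) at
# alpha across m tests, each individual test is judged against alpha/m.
# Because the correction assumes the tests are independent, it ignores
# the correlation among SNPs induced by linkage disequilibrium and is
# therefore conservative for dense marker panels.
def bonferroni_threshold(alpha: float, m: int) -> float:
    """Per-test significance threshold under a Bonferroni correction."""
    return alpha / m

# Illustrative study sizes (alpha = 0.05 is a common convention):
for m in (20, 300, 500_000, 1_000_000):
    t = bonferroni_threshold(0.05, m)
    print(f"m = {m:>9} tests -> per-test threshold = {t:.2e}")
```

For 20 candidate gene tests the threshold is a manageable 2.5 x 10^-3, but at 1 million SNPs it shrinks to 5 x 10^-8, a level that modest susceptibility allele effects will rarely reach.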