The problem of adaptively setting the timeout interval for retransmitting a packet is discussed, and a layered view of timeout algorithms is presented. It is shown that a timeout algorithm consists essentially of five layers, or procedures, which can be chosen and modified independently. A number of timeout algorithms proposed in the literature are decomposed into these five layers. One key layer not discussed in the literature is that of determining the sample round-trip delay for packets that have been transmitted more than once. It is shown that this layer has a significant impact on network performance: under repeated packet loss, most timeout algorithms either diverge or converge to a wrong value. A number of alternative schemes are presented, and it is argued that divergence is preferable to false convergence, since growing timeouts also help reduce network traffic during congestion.
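To make the retransmission-ambiguity problem concrete, the sketch below shows a conventional EWMA ("smoothed") round-trip-time estimator with exponential backoff. It is an illustrative implementation, not the paper's own algorithm: the class name, the smoothing constant alpha, and the multiplier beta are assumptions chosen to match common practice. For a packet sent more than once, the measured delay is ambiguous (the acknowledgment may answer the first or the latest transmission); timing from the latest transmission can underestimate the delay and drive the timeout toward a falsely small value, while timing from the first can inflate it without bound. The sketch resolves the ambiguity by discarding such samples, as in Karn's scheme, one of the possible alternatives.

```python
class RtoEstimator:
    """Illustrative EWMA round-trip-time estimator with exponential backoff.

    The constants are conventional (alpha = 1/8 smoothing weight, beta = 2
    timeout multiplier), chosen for illustration; they are not taken from
    the source text.
    """

    def __init__(self, initial_rto=1.0, alpha=0.125, beta=2.0):
        self.srtt = None        # smoothed round-trip-time estimate (seconds)
        self.rto = initial_rto  # current retransmission timeout (seconds)
        self.alpha = alpha
        self.beta = beta

    def on_ack(self, sample_rtt, was_retransmitted):
        """Fold one round-trip sample into the estimate when an ACK arrives.

        If the acknowledged packet was transmitted more than once, the
        sample is ambiguous; here it is simply discarded (Karn's scheme)
        rather than risking false convergence or divergence.
        """
        if was_retransmitted:
            return  # ambiguous sample: ignore it
        if self.srtt is None:
            self.srtt = sample_rtt  # first sample initializes the estimate
        else:
            self.srtt = (1 - self.alpha) * self.srtt + self.alpha * sample_rtt
        self.rto = self.beta * self.srtt

    def on_timeout(self):
        """Double the timeout (capped); the growing interval also throttles
        retransmissions while the network is congested."""
        self.rto = min(2 * self.rto, 64.0)
```

Discarding ambiguous samples means the estimator learns nothing while every packet is being retransmitted; the backoff in `on_timeout` is what keeps the timeout growing, which is exactly the divergence-like behavior the text argues is preferable during congestion.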