Missing information is ubiquitous in relational datasets. Imputing missing relations, also known as link prediction, has become an increasingly important problem in relational data analysis as huge amounts of data accumulate across many fields. Recent advances in latent variable models have greatly improved state-of-the-art link prediction accuracy, but this improvement comes at the price of increasing model complexity. In this paper, we propose a novel link prediction algorithm, the marginalized denoising model (MDM), which casts the problem of predicting unobserved or missing links in a given relational matrix as one of matrix denoising. The method learns a mapping function that models the embedded topological structure of the relational network by capturing so-called indirect affinities among entities. We train the mapping function to recover the originally observed matrix from a conceptually "infinite" number of corrupted matrices in which some links are randomly masked from the observed matrix. By re-applying the learned function to the observed relational matrix, we "denoise" the observed matrix and thereby recover the unobserved links. Experimental results on several benchmarks demonstrate the superior performance of the new method over several state-of-the-art link prediction methods.
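The corrupt-and-reconstruct training loop described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual algorithm: it assumes a linear mapping `W` trained by ridge-regularized least squares on a finite sample of corrupted copies (standing in for the "infinite" marginalized corruption), and all names, sizes, and hyperparameters (`p_mask`, `n_corrupt`, `reg`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy observed relational (adjacency) matrix X: 1 = observed link.
# The matrix is generated synthetically purely for illustration.
n = 30
base = rng.random((n, 4))
X = (base @ base.T > 1.0).astype(float)

p_mask = 0.2     # probability of masking an observed link
n_corrupt = 50   # number of sampled corrupted copies
reg = 1e-2       # ridge regularization strength

# Accumulate least-squares statistics over corrupted copies so that
# the linear map W satisfies W @ X_tilde ≈ X (columns are entities).
P = np.zeros((n, n))  # running sum of X @ X_tilde.T
Q = np.zeros((n, n))  # running sum of X_tilde @ X_tilde.T
for _ in range(n_corrupt):
    mask = rng.random(X.shape) < p_mask
    X_tilde = X * ~mask            # randomly drop some observed links
    P += X @ X_tilde.T
    Q += X_tilde @ X_tilde.T

W = P @ np.linalg.inv(Q + reg * np.eye(n))

# "Denoise" the observed matrix: high scores at zero entries of X
# are candidate missing links.
scores = W @ X
```

Because `W` is trained to restore links that were randomly masked out, applying it to the uncorrupted `X` assigns elevated scores to entries whose neighborhoods resemble those of observed links, which is the denoising view of link prediction.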