Accurate segmentation of the target tumor is a precondition for effective radiation therapy. While hybrid positron emission tomography-computed tomography (PET-CT) has become a standard imaging tool in radiation oncology practice, many existing segmentation methods still operate on a single modality. We propose an automatic 3-D method based on unsupervised learning that jointly delineates tumor contours in PET-CT images, exploiting the complementary information the two distinct modalities provide to each other to improve segmentation. As PET-CT images are noisy and blurry, the theory of belief functions is adopted to model uncertain and imprecise image information and to fuse it in a stable way. To ensure reliable clustering in each modality, an adaptive distance metric is proposed to quantify distortions, and spatial information is taken into account. A novel context term is designed to encourage consistent segmentation between the two modalities. In addition, during the iterative unsupervised learning process, a specific fusion strategy is applied to further adjust the results of the two distinct modalities. The proposed co-segmentation method was evaluated on fifteen PET-CT images of non-small cell lung cancer (NSCLC) patients, showing good performance compared with several other methods.
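To make the fusion idea concrete, below is a minimal sketch of Dempster's rule of combination, the canonical fusion operator in belief-function theory, applied to per-voxel evidence from the two modalities. It is an illustration only, not the paper's specific fusion strategy: the frame of discernment ('T' for tumor, 'B' for background), the function name dempster_combine, and the numeric mass values for the PET and CT sources are all hypothetical.

```python
from itertools import product

# Frame of discernment: 'T' = tumor, 'B' = background.
# A mass function assigns belief to subsets of the frame; mass on the
# full frame {'T','B'} encodes ignorance (imprecision).

def dempster_combine(m1, m2):
    """Fuse two mass functions with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass assigned to contradictory hypotheses
    # Normalize by the non-conflicting mass (assumes conflict < 1).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Hypothetical evidence for one voxel: PET strongly suggests tumor,
# while CT is less certain and keeps more mass on the whole frame.
m_pet = {frozenset('T'): 0.7, frozenset('B'): 0.1, frozenset('TB'): 0.2}
m_ct  = {frozenset('T'): 0.4, frozenset('B'): 0.2, frozenset('TB'): 0.4}

fused = dempster_combine(m_pet, m_ct)
for subset, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(subset), round(mass, 3))
```

In this toy example the fused mass on {tumor} rises to about 0.78 while the residual mass on the full frame shrinks to about 0.10, illustrating how combining two imprecise sources can yield a more committed, yet still uncertainty-aware, decision per voxel.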