In radiotherapy treatment planning for cancer patients, the acquisition of multiple images carrying different yet complementary information is rapidly becoming the norm. Besides CT datasets, PET and/or MRI or MRS images are increasingly used to aid in defining the target volume for treatment optimization. We are investigating methods to integrate the available information for joint target registration and segmentation of multi-modality images as perceived by the human observer. Toward this goal, we are exploring multi-valued level set deformable models, in conjunction with human perception models, for simultaneous delineation on multi-modality image sets consisting of combinations of PET, CT, and MR data. Information from the multi-modality image sets is integrated according to a logical model to define the final target volume. The methods were demonstrated qualitatively on lung cancer patient cases with PET/CT and a prostate cancer case with CT and MR. For quantitative analysis, we used a series of CT, PET, and MR phantom data. The phantom studies suggest 90% segmentation accuracy and less than 2% volume error when all three modalities are integrated, compared with 74% accuracy and 4.4% volume error when using CT alone. These results indicate that this semi-automated, multimodality-based definition of the biophysical target provides a feasible and accurate framework for integrating complementary imaging information from different modalities, and potentially a useful tool for optimizing radiotherapy plans for cancer patients.
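To make the idea of a multi-valued level set concrete, the sketch below evolves a single level set function driven jointly by several co-registered image channels, using a simplified multi-channel Chan-Vese region-competition force. This is an illustrative assumption, not the authors' implementation: the paper's method additionally incorporates human perception models and a logical combination rule, which are omitted here, and all parameter values (`n_iter`, `dt`, `mu`) are arbitrary choices for the sketch.

```python
import numpy as np

def multivalued_chan_vese(images, n_iter=200, dt=0.5, mu=0.1):
    """Evolve one level set phi jointly driven by several co-registered
    image channels (a simplified multi-valued Chan-Vese sketch).

    images: list of 2D arrays, all the same shape and already registered.
    Returns a boolean mask of the segmented region (phi > 0).
    """
    shape = images[0].shape
    # Initialize phi as a centered circle, positive inside.
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    cy, cx = shape[0] / 2.0, shape[1] / 2.0
    phi = min(shape) / 3.0 - np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)

    for _ in range(n_iter):
        inside = phi > 0
        force = np.zeros(shape)
        for img in images:
            # Mean intensities inside/outside the current contour.
            c_in = img[inside].mean() if inside.any() else 0.0
            c_out = img[~inside].mean() if (~inside).any() else 0.0
            # Region-competition term, summed over all modalities:
            # positive where the pixel matches the inside statistics.
            force += (img - c_out) ** 2 - (img - c_in) ** 2
        # Crude curvature regularization via Laplacian smoothing of phi.
        lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
               + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
        phi = phi + dt * (force / len(images) + mu * lap)
    return phi > 0
```

Because every modality contributes to the same evolution force, a structure that is low-contrast in one channel (e.g. CT) can still be recovered when it is conspicuous in another (e.g. PET), which is the intuition behind integrating complementary modalities.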