Purpose: In FDG‐PET imaging, blurring due to breathing motion can significantly degrade the quality of the observed image and obscure the delineation of the tumor boundary. We demonstrate a new deblurring approach that combines patient‐specific motion estimates of tissue trajectories with image deconvolution techniques to partially remove breathing‐motion‐induced artifacts.

Method and Materials: The human test data set consists of co‐registered PET/CT images of patients diagnosed with lung cancer. The motion measurements are obtained using a breathing model developed recently at our institution. The model linearly maps tidal volume and airflow measured by spirometry into spatial trajectories. The parameters of the model are fitted using a template‐matching algorithm applied to CT data. The motion model is used to locally estimate the point spread function (PSF) due to breathing. The deconvolution is carried out by an expectation‐maximization (EM) iterative algorithm using the motion‐based PSF.

Results: We evaluated the proposed motion‐based deblurring algorithm on idealized test data sets as well as two human PET images, one with a large and one with a small lung tumor. Within the tumor region, the estimated PSF was assumed to satisfy the shift‐invariance property. Rescaling was used to correct for the limited image support of small tumors, which could otherwise lead to undesirable ringing effects (Gibbs phenomena). The initial results showed improvement in spatial resolution, especially in the cranio‐caudal direction. The EM algorithm converged within 20 iterations in both tumor cases. A compromise between entropy minimization and the increase in mean‐squared error (MSE) was used as the stopping criterion.

Conclusion: We have implemented a method and algorithm for removing breathing‐motion blur from PET images. The initial results show that the method is promising for improving thoracic PET images.
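As a minimal one‐dimensional sketch of the pipeline described above (this is not the authors' implementation; the trajectory amplitude, bin width, profile shape, and iteration count are illustrative assumptions), the breathing‐induced PSF can be approximated as the time‐weighted histogram of tissue displacement along one axis, and the EM deconvolution corresponds to the classical Richardson‐Lucy iteration:

```python
import numpy as np

def motion_psf(trajectory_mm, bin_width_mm=1.0):
    """Approximate the breathing PSF as the time-weighted histogram of
    tissue displacement about its mean position (shift-invariant locally)."""
    disp = trajectory_mm - trajectory_mm.mean()
    half = int(np.ceil(np.abs(disp).max() / bin_width_mm))
    # Symmetric bin edges so the kernel is centered on zero displacement.
    edges = (np.arange(-half, half + 2) - 0.5) * bin_width_mm
    hist, _ = np.histogram(disp, bins=edges)
    psf = hist.astype(float)
    return psf / psf.sum()  # normalize so blurring preserves total counts

def richardson_lucy(blurred, psf, n_iter=20):
    """EM (Richardson-Lucy) deconvolution for a nonnegative 1-D profile."""
    psf_flip = psf[::-1]  # adjoint of the convolution operator
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    for _ in range(n_iter):
        forward = np.convolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(forward, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_flip, mode="same")
    return estimate

# Idealized test: a rectangular "tumor" profile blurred by a sinusoidal
# cranio-caudal trajectory (hypothetical +/- 8 mm excursion).
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
psf = motion_psf(8.0 * np.sin(t))

truth = np.zeros(64)
truth[28:36] = 1.0                       # sharp tumor boundary
blurred = np.convolve(truth, psf, mode="same")
deblurred = richardson_lucy(blurred, psf, n_iter=20)
```

With a noiseless, exactly known PSF the iteration recovers the sharp boundary; on real PET data, noise amplification is why a stopping criterion such as the entropy/MSE trade-off mentioned above is needed in practice.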