
© 2020 SPIE. Ureteroscopy is a conventional procedure for localizing and removing kidney stones. A laser is commonly used to fragment the stones until they are small enough to be removed. The surgical team often faces considerable difficulty in performing this task, mainly due to poor image quality and the presence of floating debris and occlusions in the endoscopy video. Automated localization and segmentation can help perform stone fragmentation efficiently. However, automatic segmentation of kidney stones is complex and challenging because the stones are heterogeneous in shape, size, texture, color, and position. In addition, the dynamic background, motion blur, local deformations, occlusions, varying illumination conditions, and visual clutter from stone debris make the segmentation task even more challenging. In this paper, we present a novel illumination-invariant, optical-flow-based segmentation technique. We introduce multi-frame dense optical flow estimation in a primal-dual optimization framework with a robust data term based on normalized correlation transform descriptors. The proposed technique leverages the motion fields between multiple frames, reducing the effect of blur, deformations, occlusions, and debris, while the proposed descriptor makes the method robust to illumination changes and dynamic backgrounds. Both qualitative and quantitative evaluations show the efficacy of the proposed method on ureteroscopy data. Our algorithm improves on the previous method by 5-8% across all evaluation metrics, and our multi-frame strategy outperforms the classically used two-frame model.
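To illustrate the illumination-invariance idea behind a correlation-based data term, the sketch below builds a per-pixel patch descriptor that is zero-meaned and variance-normalized, so additive and multiplicative intensity changes cancel out. This is a minimal NumPy illustration of the general principle, not the paper's exact normalized correlation transform descriptor; the function name and parameters are assumptions for this example.

```python
import numpy as np

def normalized_patch_descriptor(img, radius=1, eps=1e-6):
    """Illumination-invariant patch descriptor (illustrative sketch only,
    not the paper's exact formulation).

    Each pixel is described by its (2*radius+1)^2 neighbourhood after
    subtracting the patch mean and dividing by the patch standard
    deviation, so a global change img -> a*img + b leaves the
    descriptor (nearly) unchanged.
    """
    H, W = img.shape
    pad = np.pad(img.astype(np.float64), radius, mode="edge")
    k = 2 * radius + 1
    # Gather the k*k neighbourhood of every pixel into the last axis.
    patches = np.stack(
        [pad[dy:dy + H, dx:dx + W] for dy in range(k) for dx in range(k)],
        axis=-1,
    )
    mean = patches.mean(axis=-1, keepdims=True)
    std = patches.std(axis=-1, keepdims=True)
    return (patches - mean) / (std + eps)
```

Matching such descriptors between frames, instead of raw intensities, gives a data term that is robust to the varying illumination and dynamic background described above; in the paper this data term is embedded in a primal-dual optical flow optimization.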

Conference paper