Supervised image segmentation methods typically use information extracted during a learning phase to partition an image into non-overlapping regions. In our previous work, we used user-supplied seeds to segment partially overlapped translucent regions. However, providing many seeds can be so time consuming that the method performs poorly or becomes impractical. Machine learning algorithms consist of two major phases: a learning phase, in which information is generated from the data, and a test phase, in which that information is used to improve the method's performance. In our previous work, user-guided labels were used as hard seeds in the random walks (RW) algorithm. In this paper, we extend that work to segment multilabel translucent overlapped objects using soft seed information. In the learning phase, we first map each segment as a class onto a 25-dimensional manifold. The probability of assigning each image pixel to a segment (the data term) is then obtained by computing the geodesic distance between the pixel's features and these classes on the manifold. This data term is used as soft seeds in the RW algorithm in place of user-predefined labels. Experimental results on synthetic images show the strength of the proposed method compared to our previous algorithm, with segmentation accuracy above 95%.
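The soft-seed data term described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the manifold geodesic is approximated by shortest paths on a k-nearest-neighbour graph (Isomap-style), and that distances are converted to per-class probabilities with a softmax over negative distances. The function name, the feature dimension argument, and the choice of k are all hypothetical.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse.csgraph import shortest_path

def soft_seed_probabilities(pixel_feats, class_feats_list, k=5):
    """Approximate geodesic distance from each pixel's feature vector to each
    class's training samples via a k-NN graph, then convert the distances to
    per-class probabilities (the soft-seed data term fed to random walks).

    pixel_feats:       (n_pix, d) features of pixels to classify
                       (the paper uses a 25-D feature space)
    class_feats_list:  list of (n_c, d) arrays, one per segment class
    """
    train = np.vstack(class_feats_list)
    pts = np.vstack([train, pixel_feats])
    n_train = train.shape[0]

    # Build a k-NN graph over all points; zero entries mean "no edge"
    D = cdist(pts, pts)
    W = np.zeros_like(D)
    for i in range(len(pts)):
        nn = np.argsort(D[i])[1:k + 1]   # skip self at index 0
        W[i, nn] = D[i, nn]

    # Geodesic distance ~ shortest path through the graph (undirected)
    G = shortest_path(W, method='D', directed=False)

    # Distance of each pixel to each class = min over that class's samples
    dists, start = [], 0
    for cf in class_feats_list:
        end = start + cf.shape[0]
        dists.append(G[n_train:, start:end].min(axis=1))
        start = end
    d = np.stack(dists, axis=1)          # (n_pix, n_classes)

    # Softmax over negative distances -> soft-seed probabilities per pixel
    p = np.exp(-d)
    return p / p.sum(axis=1, keepdims=True)
```

In the RW framework, these probabilities would replace the hard unit labels that user-placed seeds normally provide, biasing each pixel toward its nearest class on the manifold rather than fixing it outright.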

Additional Metadata
Keywords Geodesic distance, Image segmentation, Manifold, Random walks, Translucent overlapped images
Persistent URL dx.doi.org/10.1109/TSP.2017.8076059
Conference 40th International Conference on Telecommunications and Signal Processing, TSP 2017
Citation
Mahyari, T.L. (Tayebeh Lotfi), & Dansereau, R. (2017). Learning-based multilabel random walks for image segmentation containing translucent overlapped objects. In 2017 40th International Conference on Telecommunications and Signal Processing, TSP 2017 (pp. 610–614). doi:10.1109/TSP.2017.8076059