This paper investigates the use of convolutional neural networks for image foreground extraction in a dynamic environment. The proposed solution utilises recent developments in image segmentation, applying pixel-wise classification to extract foreground targets for real-time operation. A collection of spacecraft images was assembled for network training and evaluation. The proposed technique takes advantage of transfer learning for stable training of the convolutional neural network classifier. The image extraction software was applied to thermal camera video taken by a spacecraft undocking from the International Space Station. The results show that the proposed deep learning-based image extraction has advantages over traditional background subtraction methods. This investigation provides evidence that semantic segmentation using convolutional neural networks can be an effective tool for spacecraft image isolation and extraction from a dynamically cluttered scene.
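The abstract contrasts the CNN-based approach with traditional background subtraction. As a point of reference, a minimal running-average background subtractor can be sketched as below; this is an illustrative example only (the function name, parameters, and synthetic data are assumptions, not the specific baseline the authors evaluated):

```python
import numpy as np

def background_subtract(frames, alpha=0.05, threshold=30):
    """Illustrative running-average background subtraction.

    frames: iterable of 2-D uint8 grayscale frames.
    Returns a list of binary foreground masks (uint8, 0 or 255).
    """
    background = None
    masks = []
    for frame in frames:
        f = frame.astype(np.float32)
        if background is None:
            # Bootstrap the background model from the first frame.
            background = f.copy()
        # Pixels that differ strongly from the background model
        # are labelled foreground.
        diff = np.abs(f - background)
        masks.append((diff > threshold).astype(np.uint8) * 255)
        # Slowly adapt the background model to gradual scene changes.
        background = (1 - alpha) * background + alpha * f
    return masks
```

Per-pixel models of this kind assume a largely static background, which is one reason they struggle in the dynamically cluttered scenes that motivate the segmentation approach above.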

Keywords: Background subtraction, CNN, Semantic segmentation
dx.doi.org/10.11159/cdsr17.131
4th International Conference of Control, Dynamic Systems, and Robotics, CDSR 2017
Department of Mechanical and Aerospace Engineering

Shi, J.-F. (Jian-Feng), Ulrich, S., & Ruel, S. (Stephane). (2017). International Space Station image extraction from a dynamic environment using deep learning. In International Conference of Control, Dynamic Systems, and Robotics. doi:10.11159/cdsr17.131