This paper presents a method for dynamic simulation and target-vehicle detection of CubeSat spacecraft in low Earth orbit. The orbit-prediction technique is verified against the Clohessy-Wiltshire solution while allowing for delta-V adjustments. Attitude simulation is performed using the Modified Rodrigues Parameters variation of Euler's equations for rigid-body motion. An attitude kinematic driver was developed to point the chaser-vehicle camera toward the client CubeSat. Camera images are generated with rendering software using the predicted motion of the target and chaser spacecraft. Spacecraft detection is performed with two state-of-the-art deep-learning convolutional neural networks, the 101-layer ResNet and the Inception-ResNet-V2 classifiers, using Faster R-CNN as the object-detection engine. Results show that, after applying transfer learning, both classification networks perform well on both simulation and laboratory-experiment images.
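The abstract mentions verification against the Clohessy-Wiltshire (Hill) solution for relative motion about a circular orbit. The paper's actual verification setup is not given here, but the closed-form CW position solution it refers to can be sketched as follows; the function name, frame convention, and numerical values are illustrative assumptions, not taken from the paper.

```python
import math

def cw_position(t, n, r0, v0):
    """Closed-form relative position under the Clohessy-Wiltshire equations.

    t  : time since epoch [s]
    n  : mean motion of the target's circular orbit [rad/s]
    r0 : initial relative position (x, y, z) in the LVLH frame [m]
         (x radial, y along-track, z cross-track -- an assumed convention)
    v0 : initial relative velocity (vx, vy, vz) [m/s]
    Returns the relative position (x, y, z) at time t [m].
    """
    x0, y0, z0 = r0
    vx0, vy0, vz0 = v0
    s, c = math.sin(n * t), math.cos(n * t)
    # In-plane motion couples radial and along-track components;
    # the secular (n*t) terms capture along-track drift.
    x = (4.0 - 3.0 * c) * x0 + (s / n) * vx0 + (2.0 / n) * (1.0 - c) * vy0
    y = (6.0 * (s - n * t) * x0 + y0
         + (2.0 / n) * (c - 1.0) * vx0
         + (1.0 / n) * (4.0 * s - 3.0 * n * t) * vy0)
    # Cross-track motion is a decoupled harmonic oscillator.
    z = c * z0 + (s / n) * vz0
    return (x, y, z)
```

A quick sanity check of the dynamics: a pure cross-track offset oscillates at the orbit rate, so after half an orbital period it returns with opposite sign, while the in-plane drift terms make an uncompensated radial offset grow along-track.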

Additional Metadata
Persistent URL: dx.doi.org/10.2514/6.2018-1604
Conference: AIAA Guidance, Navigation, and Control Conference, 2018
Citation
Shi, J.-F. (Jian-Feng), Ulrich, S., & Ruel, S. (Stéphane). (2018). CubeSat simulation and detection using monocular camera images and convolutional neural networks. In AIAA Guidance, Navigation, and Control Conference, 2018. doi:10.2514/6.2018-1604