Google Street View and the emergence of self-driving vehicles afford an unprecedented capacity to observe our planet. Fused with dramatic advances in artificial intelligence, the ability to extract patterns and meaning from these data streams heralds an era of new insights into the physical world. Drawing appropriate inferences about and between environments, however, requires that these data be selected systematically so as to produce representative, unbiased samples. To this end, we introduce the Systematic Street View Sampler (S3) framework, which enables researchers to produce their own user-defined datasets of Street View imagery. We describe the algorithm and express its asymptotic complexity with respect to a new limiting computational resource (Google API Call Count). Using the Amazon Mechanical Turk distributed annotation environment, we demonstrate the utility of S3 in generating high-quality, representative datasets for machine vision applications. The S3 algorithm is open source and available at github.com/CU-BIC/S3, along with a high-quality dataset representing power infrastructure in rural regions of southern Ontario, Canada.
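The core idea of the abstract (systematic, user-defined spatial sampling of Street View imagery, with API calls as the limiting resource) can be illustrated with a minimal sketch. This is not the S3 algorithm itself; the grid spacing, function names, and parameters below are illustrative assumptions, and the actual implementation is in the repository at github.com/CU-BIC/S3.

```python
# Hedged sketch: NOT the S3 algorithm. A minimal illustration of systematic
# spatial sampling of Street View imagery over a bounding box, where each
# sample point costs one request against the Street View Static API's
# metadata endpoint -- i.e., the API call count grows with grid density.

def grid_sample(lat_min, lat_max, lon_min, lon_max, step_deg):
    """Return an evenly spaced grid of (lat, lon) sample points.
    Integer step counting avoids floating-point drift in the loop bounds."""
    n_lat = int(round((lat_max - lat_min) / step_deg)) + 1
    n_lon = int(round((lon_max - lon_min) / step_deg)) + 1
    return [(lat_min + i * step_deg, lon_min + j * step_deg)
            for i in range(n_lat) for j in range(n_lon)]

def metadata_urls(points, key="YOUR_API_KEY"):
    """Build one metadata-check URL per sample point (one API call each).
    The metadata endpoint reports whether imagery exists at a location
    before the (more costly) image itself is fetched."""
    base = "https://maps.googleapis.com/maps/api/streetview/metadata"
    return [f"{base}?location={lat},{lon}&key={key}" for lat, lon in points]

# A small illustrative bounding box; 3 x 3 grid -> 9 API calls.
points = grid_sample(43.0, 43.01, -80.01, -80.0, 0.005)
urls = metadata_urls(points)
print(len(urls))  # → 9
```

The point of the sketch is the cost model: the number of API calls is determined by the sampling density, which is why the paper expresses the algorithm's asymptotic complexity in terms of Google API Call Count rather than conventional time or space.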

Additional Metadata
Keywords Computational Complexity, Image Classification, Machine Vision, Open Source Software, Remote Sensing, Sampling Methods
Persistent URL dx.doi.org/10.1109/CRV.2018.00028
Conference 15th Conference on Computer and Robot Vision, CRV 2018
Citation
Dick, K. (Kevin), Charih, F. (François), Souley Dosso, Y. (Yasmina), Russell, L. (Luke), & Green, J. (2018). Systematic street view sampling: High quality annotation of power infrastructure in rural Ontario. In Proceedings - 2018 15th Conference on Computer and Robot Vision, CRV 2018 (pp. 134–141). doi:10.1109/CRV.2018.00028