We present an AI called Visuo that estimates quantitative visuospatial magnitudes (e.g., heights, lengths) given adjective-noun pairs as input (e.g., "big hat"). It uses a database of tagged images as memory and infers unexperienced magnitudes by analogy with semantically related concepts in memory. We show that transferring width-height ratios from semantically related concepts yields significantly lower error rates than transferring them from dissimilar concepts when predicting the ratios of novel inputs.
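The transfer idea described in the abstract can be sketched in a few lines. The code below is a minimal illustration, not Visuo's implementation: the memory dictionary, concept names, ratio values, and the character-bigram similarity measure are all placeholders (the actual system draws on a database of tagged images and a richer notion of semantic relatedness).

```python
# Illustrative sketch: predict a width-height ratio for a novel concept
# by borrowing the ratio of the most semantically similar concept in memory.
from typing import Dict

# Hypothetical memory of concepts and width-height ratios; values are made up.
MEMORY: Dict[str, float] = {
    "hat": 1.6,
    "cap": 1.5,
    "tower": 0.3,
    "fence": 4.0,
}


def _bigrams(word: str) -> set:
    """Character bigrams of a word, used by the stand-in similarity measure."""
    return {word[i:i + 2] for i in range(len(word) - 1)}


def semantic_similarity(a: str, b: str) -> float:
    """Crude stand-in similarity (Dice coefficient over character bigrams).
    A real system would use a lexical resource or distributional similarity."""
    ba, bb = _bigrams(a), _bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))


def predict_ratio(novel_concept: str, memory: Dict[str, float]) -> float:
    """Transfer the width-height ratio of the most similar known concept."""
    best = max(memory, key=lambda known: semantic_similarity(novel_concept, known))
    return memory[best]


if __name__ == "__main__":
    # A concept absent from memory borrows its ratio from the closest match.
    print(predict_ratio("caps", MEMORY))
```

Under this (deliberately crude) similarity proxy, "caps" matches "cap" most closely and inherits its ratio; the paper's result is that such similarity-guided transfer outperforms borrowing from dissimilar concepts.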

Additional Metadata
Conference: 2010 AAAI Workshop
Citation:
Davies, J., & Gagné, J. (2010). Estimating quantitative magnitudes using semantic similarity. In AAAI Workshop - Technical Report (pp. 14–19).