The present article explores two novel methods that integrate distributed representations with terminology extraction. Both methods assess the specificity of a word (unigram) to the target corpus by leveraging its distributed representation in the target domain as well as in the general domain. The first approach uses this distributed specificity as a filter, and the second applies it directly to the corpus. The filter can be mounted on any other Automatic Terminology Extraction (ATE) method, allows merging any number of other ATE methods, and achieves remarkable results with minimal training. The direct approach does not perform as well as the filtering approach, but it reemphasizes that, with distributed specificity as the word representation, very little data is required to train an ATE classifier. This encourages the development of more minimally supervised ATE algorithms in the future.
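The abstract does not spell out how distributed specificity is computed, so the following is only an illustrative sketch of the general idea: a word is represented in two embedding spaces (one trained on a general corpus, one on the domain corpus), and the divergence between the two representations serves as a specificity score that can filter term candidates. The toy vectors, the cosine-based divergence, and the threshold below are all assumptions for demonstration, not the authors' actual model.

```python
import numpy as np

# Toy embeddings standing in for vectors trained on a general corpus and on
# a domain corpus; the words and values here are purely illustrative.
general = {"network":  np.array([0.9, 0.1, 0.0]),
           "protocol": np.array([0.8, 0.1, 0.1]),
           "the":      np.array([0.5, 0.5, 0.5])}
domain  = {"network":  np.array([0.1, 0.9, 0.2]),
           "protocol": np.array([0.1, 0.8, 0.4]),
           "the":      np.array([0.5, 0.5, 0.5])}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def distributed_specificity(word):
    # A word whose distributional behaviour shifts between the general and
    # the domain corpus (low cross-corpus similarity) is treated as more
    # domain-specific; function words like "the" behave alike in both.
    return 1.0 - cosine(general[word], domain[word])

def filter_candidates(candidates, threshold=0.3):
    # Filtering use: keep only candidates whose specificity exceeds a
    # threshold; the output could also be intersected with any other
    # ATE method's candidate list.
    return [w for w in candidates if distributed_specificity(w) > threshold]
```

Under this sketch, `filter_candidates(["network", "protocol", "the"])` keeps the domain-shifted words and drops "the", whose identical vectors yield a specificity of zero.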

Additional Metadata
Keywords Automatic terminology extraction, Distributed specificity, Neural networks, Representation learning, Word embeddings
Persistent URL dx.doi.org/10.1075/term.00012.amj
Journal Terminology
Citation
Amjadian, E. (Ehsan), Inkpen, D. (Diana), Paribakht, T. S. (T. Sima), & Faez, F. (Farahnaz). (2018). Distributed specificity for automatic terminology extraction. Terminology, 24, 23–40. doi:10.1075/term.00012.amj