Semi-supervised representation learning for domain adaptation using dynamic dependency networks
Recently, various unsupervised representation learning approaches have been investigated to produce augmenting features for natural language processing systems in open-domain learning scenarios. In this paper, we propose a dynamic dependency network model to conduct semi-supervised representation learning. It exploits existing task-specific labels in the source domain, in addition to the large amount of unlabeled data from both the source and target domains, to produce informative features for NLP tasks. We empirically evaluate the proposed learning technique on the part-of-speech tagging task using Wall Street Journal and MEDLINE sentences, and on the syntactic chunking task using the Wall Street Journal and Brown corpora. Our experimental results show that the proposed semi-supervised learning model can produce more effective features than unsupervised representation learning methods for open-domain part-of-speech taggers and syntactic chunkers.
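To make the general recipe in the abstract concrete, here is a minimal sketch of representation learning as augmenting features. This is not the paper's dynamic dependency network; it is a deliberately simplified, assumed illustration in which each word's representation is derived from its neighbor contexts in unlabeled source- and target-domain sentences, and that representation is then available as an extra feature for a supervised tagger or chunker. All function names and the toy data are hypothetical.

```python
from collections import defaultdict

# Hedged sketch, NOT the paper's method: learn crude word representations
# from unlabeled source + target sentences, then expose them as augmenting
# features for a downstream supervised NLP model.

def context_signatures(sentences):
    """Collect left/right neighbor counts for each word from unlabeled text."""
    sig = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        padded = ["<s>"] + sent + ["</s>"]
        for i in range(1, len(padded) - 1):
            word = padded[i]
            sig[word][("L", padded[i - 1])] += 1  # left-neighbor context
            sig[word][("R", padded[i + 1])] += 1  # right-neighbor context
    return sig

def representation_feature(word, sig):
    """A one-dimensional 'representation': the word's most frequent context."""
    if word not in sig:
        return "ctx=<unk>"
    side, neighbor = max(sig[word].items(), key=lambda kv: kv[1])[0]
    return f"ctx={side}:{neighbor}"

# Toy unlabeled data drawn from both domains (last sentence is target-like).
unlabeled = [
    ["the", "dog", "runs"],
    ["the", "cat", "runs"],
    ["a", "gene", "mutates"],
]
sig = context_signatures(unlabeled)
print([representation_feature(w, sig) for w in ["dog", "cat", "gene"]])
```

In this toy setting, words that occur in similar contexts ("dog" and "cat") receive the same induced feature, so a tagger trained on source-domain labels can generalize that feature to unseen target-domain words; the paper's model additionally conditions the representation learning on the available task-specific labels.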
Keywords: Domain adaptation, POS tagging, Representation learning, Syntactic chunking
Conference: 24th International Conference on Computational Linguistics, COLING 2012
Xiao, M. (Min), Guo, Y., & Yates, A. (Alexander). (2012). Semi-supervised representation learning for domain adaptation using dynamic dependency networks. In 24th International Conference on Computational Linguistics - Proceedings of COLING 2012: Technical Papers (pp. 2867–2882).