Unsupervised Multi-Topic Labeling for Spoken Utterances
Authors: Weigelt, Sebastian; Keim, Jan; Hey, Tobias; Tichy, Walter F.
Conference: 2019 IEEE International Conference on Humanized Computing and Communication (HCC)
Abstract: Systems such as Alexa, Cortana, and Siri appear rather smart. However, they only react to predefined wordings and do not actually grasp the user's intent. To overcome this limitation, a system must grasp the topics the user is talking about. Therefore, we apply unsupervised multi-topic labeling to spoken utterances. Although topic labeling is a well-studied task on textual documents, its potential for spoken input is almost unexplored. Our approach to topic labeling is tailored to spoken utterances; it copes with short and ungrammatical input. The approach is two-tiered. First, we disambiguate word senses. We utilize Wikipedia as a pre-labeled corpus to train a naïve Bayes classifier. Second, we build topic graphs based on DBpedia relations. We use two strategies to determine central terms in the graphs, i.e., the shared topics: one focuses on the dominant senses in the utterance, and the other covers as many distinct senses as possible. Our approach creates multiple distinct topics per utterance and ranks the results. The evaluation shows that the approach is feasible; the word sense disambiguation achieves a recall of 0.799. Concerning topic labeling, in a user study subjects assessed that in 90.9% of the cases at least one of the first four proposed topic labels is a good fit. With regard to precision, the subjects judged that 77.2% of the top-ranked labels are a good fit or good but somewhat too broad (Fleiss' kappa = 0.27).
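
The abstract describes a two-tiered pipeline: naïve Bayes word sense disambiguation trained on Wikipedia, followed by topic graphs built from DBpedia relations whose central nodes serve as topic labels. The Python sketch below only illustrates that general idea under strong simplifying assumptions; the sense inventory, training contexts, and relation edges are invented placeholders rather than the authors' Wikipedia/DBpedia data, and plain degree centrality merely stands in for the two centrality strategies described in the paper.

# Minimal sketch of the two-tiered idea (not the authors' implementation).
# All senses, context words, and relation edges are made-up stand-ins.
import math
from collections import Counter, defaultdict

# --- Tier 1: naïve Bayes word sense disambiguation -------------------------
# Toy "pre-labeled corpus": context words observed with each sense of "java".
TRAINING = {
    "java_programming_language": ["code", "class", "compile", "virtual", "machine"],
    "java_island": ["indonesia", "island", "volcano", "coffee", "jakarta"],
}

def train_nb(training):
    # Per-sense word counts; uniform priors are enough for this sketch.
    counts = {sense: Counter(words) for sense, words in training.items()}
    priors = {sense: 1.0 / len(training) for sense in training}
    return counts, priors

def disambiguate(context, counts, priors):
    # Pick the sense with the highest log-probability given the context words.
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for sense, c in counts.items():
        total = sum(c.values())
        score = math.log(priors[sense])
        for w in context:
            # Laplace smoothing so unseen context words do not zero the score.
            score += math.log((c[w] + 1) / (total + len(vocab)))
        scores[sense] = score
    return max(scores, key=scores.get)

# --- Tier 2: topic graph and centrality -------------------------------------
# Toy edges standing in for DBpedia relations between resolved senses.
EDGES = [
    ("java_programming_language", "programming"),
    ("software", "programming"),
    ("computer_science", "programming"),
    ("java_programming_language", "software"),
]

def rank_by_degree(edges):
    # Rank nodes by degree; the most connected nodes act as candidate labels.
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return sorted(degree.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    counts, priors = train_nb(TRAINING)
    print("resolved sense:", disambiguate(["compile", "the", "class"], counts, priors))
    print("candidate topic labels:", rank_by_degree(EDGES)[:3])

Running the sketch resolves the context ["compile", "the", "class"] to the programming-language sense and ranks "programming" as the most central candidate label; in the paper, the graph is instead built from DBpedia relations between the disambiguated senses of an utterance.
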
@INPROCEEDINGS{8940820,
author={S. {Weigelt} and J. {Keim} and T. {Hey} and W. F. {Tichy}},
booktitle={2019 IEEE International Conference on Humanized Computing and Communication (HCC)},
title={Unsupervised Multi-Topic Labeling for Spoken Utterances},
year={2019},
volume={},
number={},
pages={38-45},
keywords={Bayes methods;graph theory;learning (artificial intelligence);natural language processing;pattern classification;text analysis;Web sites;naïve-bayes classifier;Wikipedia;word sense disambiguation;multiple distinct topics;topic graphs;topic labeling;spoken utterances;unsupervised multitopic labeling;Topic Labeling, Topic Modeling, Unsupervised Machine Learning, Graph Centrality Measures, Word Sense Disambiguation, DBpedia, Wikipedia, Semantic Annotation, Spoken Language Interfaces, Spoken Language Understanding, Natural Language Processing, Natural Language Understanding},
doi={10.1109/HCC46620.2019.00014},
ISSN={},
month={Sep.},}