Context Model Acquisition from Spoken Utterances (Journal Article)
Authors: Sebastian Weigelt, Tobias Hey, and Walter F. Tichy
Journal: International Journal of Software Engineering and Knowledge Engineering
Abstract: Current systems with spoken language interfaces do not leverage contextual information. Therefore, they struggle to understand speakers’ intentions. We propose a system that creates a context model from user utterances to overcome this lack of information. It comprises eight types of contextual information organized in three layers: individual, conceptual, and hierarchical. We have implemented our approach as a part of the project PARSE, which aims at enabling laypersons to construct simple programs by dialog. Our implementation incrementally generates context, including occurring entities and actions as well as their conceptualizations, state transitions, and other types of contextual information. Its analyses are knowledge- or rule-based (depending on the context type), but we make use of many well-known probabilistic NLP techniques. In a user study we have shown the feasibility of our approach, achieving F1 scores from 72% to 98% depending on the type of contextual information. The context model enables us to resolve complex identity relations. However, quantifying this effect is subject to future work. Likewise, we plan to investigate whether our context model is useful for other language understanding tasks, e.g., anaphora resolution, topic analysis, or correction of automatic speech recognition errors.
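To make the layered structure described in the abstract concrete, here is a minimal, hypothetical sketch of such a context model in Python. The class and field names are illustrative assumptions, not the PARSE implementation's actual API, and only the context types the abstract names explicitly are modeled; the remaining types and the hierarchical layer are elided.

```python
# Hypothetical sketch of a layered context model; names are illustrative,
# not the PARSE project's actual data structures.
from dataclasses import dataclass, field


@dataclass
class Entity:
    """A thing mentioned in an utterance (individual layer)."""
    name: str


@dataclass
class Action:
    """An action mentioned in an utterance (individual layer)."""
    verb: str
    arguments: list[Entity] = field(default_factory=list)


@dataclass
class Concept:
    """A conceptualization grouping entity mentions (conceptual layer)."""
    label: str
    members: list[Entity] = field(default_factory=list)


@dataclass
class StateTransition:
    """A state change an action causes on an entity (conceptual layer)."""
    entity: Entity
    before: str
    after: str


@dataclass
class ContextModel:
    """Grows incrementally as utterances arrive; the hierarchical layer and
    the paper's remaining context types are omitted for brevity."""
    entities: list[Entity] = field(default_factory=list)
    actions: list[Action] = field(default_factory=list)
    concepts: list[Concept] = field(default_factory=list)
    transitions: list[StateTransition] = field(default_factory=list)

    def add_entity(self, name: str) -> Entity:
        # Reuse an existing mention with the same name rather than duplicating
        # it; the paper's knowledge- and rule-based analyses would resolve
        # identity far more carefully than this string match.
        for entity in self.entities:
            if entity.name == name:
                return entity
        entity = Entity(name)
        self.entities.append(entity)
        return entity


# Usage: model = ContextModel(); cup = model.add_entity("cup")
```

Incremental updates of this kind are what allow identity relations to be resolved across utterances, since each new mention is merged into, rather than appended to, the existing model.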
@article{doi:10.1142/S0218194017400058,
author = {Weigelt, Sebastian and Hey, Tobias and Tichy, Walter F.},
title = {Context Model Acquisition from Spoken Utterances},
journal = {International Journal of Software Engineering and Knowledge Engineering},
volume = {27},
number = {09n10},
pages = {1439--1453},
year = {2017},
doi = {10.1142/S0218194017400058},
URL = {https://doi.org/10.1142/S0218194017400058},
abstract = {Current systems with spoken language interfaces do not leverage contextual information. Therefore, they struggle to understand speakers’ intentions. We propose a system that creates a context model from user utterances to overcome this lack of information. It comprises eight types of contextual information organized in three layers: individual, conceptual, and hierarchical. We have implemented our approach as a part of the project PARSE, which aims at enabling laypersons to construct simple programs by dialog. Our implementation incrementally generates context, including occurring entities and actions as well as their conceptualizations, state transitions, and other types of contextual information. Its analyses are knowledge- or rule-based (depending on the context type), but we make use of many well-known probabilistic NLP techniques. In a user study we have shown the feasibility of our approach, achieving F1 scores from 72\% to 98\% depending on the type of contextual information. The context model enables us to resolve complex identity relations. However, quantifying this effect is subject to future work. Likewise, we plan to investigate whether our context model is useful for other language understanding tasks, e.g., anaphora resolution, topic analysis, or correction of automatic speech recognition errors.}
}