Publications of the ERIC lab
Improving Topic Evaluation Using Conceptual Knowledge
Author(s): C. Musat, J. Velcin, S. Trausan-Matu, M.-A. Rizoiu
Conference: 22nd International Joint Conference on Artificial Intelligence (IJCAI), ES, 2011-07-20. Published in: Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI), 2011, pp. 1866-1871.
Ref HAL: hal-00616245_v1
Abstract: The growing number of statistical topic models has led to a need to better evaluate their output. Traditional evaluation measures estimate a model's fit to unseen data, but it has recently been shown that human judgment can differ greatly from these measures. Methods that better emulate human judgment are therefore needed. In this paper we present a system that computes the utility of individual topics from a given model on the basis of information drawn from a given ontology, in this case WordNet. A topic's utility is regarded as the ability to attribute a concept to the topic and, based on that concept, to separate words related to the topic from unrelated ones. In multiple experiments, on two different corpora and difficulty levels, we demonstrate the correlation between the automatic evaluation method and the answers received from human evaluators. By shifting the evaluation focus from a statistical one to a conceptual one, we are able to detect which topics are conceptually meaningful and rank them accordingly.
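As a rough illustration of the conceptual-utility idea described in the abstract (not the authors' implementation), the sketch below scores a topic by the single concept that best covers its words. The mini `ONTOLOGY` dictionary is a hypothetical toy stand-in for WordNet, and `topic_utility` is an assumed name:

```python
# Toy sketch of concept-based topic evaluation: a topic is scored by the
# concept that best separates its words into related vs. unrelated.
# ONTOLOGY is a hypothetical stand-in for WordNet, not the paper's resource.

ONTOLOGY = {
    "finance": {"bank", "money", "loan", "interest", "credit"},
    "nature":  {"river", "tree", "water", "forest", "bank"},
    "sports":  {"game", "team", "score", "player", "ball"},
}

def topic_utility(topic_words):
    """Return (best_concept, coverage): the fraction of the topic's words
    covered by the best-matching concept in the ontology."""
    best_concept, best_cov = None, 0.0
    for concept, related in ONTOLOGY.items():
        cov = len(set(topic_words) & related) / len(topic_words)
        if cov > best_cov:
            best_concept, best_cov = concept, cov
    return best_concept, best_cov

# Rank topics by conceptual meaningfulness, as the abstract suggests.
topics = [
    ["money", "bank", "loan", "credit"],   # conceptually coherent
    ["river", "score", "loan", "tree"],    # mixed, harder to label
]
ranked = sorted(topics, key=lambda t: topic_utility(t)[1], reverse=True)
```

A coherent topic receives a high coverage score under one concept, while a mixed topic is split across concepts and scores low, so sorting by coverage ranks topics roughly by conceptual meaningfulness.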