Please use this identifier to cite or link to this item: https://hdl.handle.net/2440/28910
Type: Conference paper
Title: An empirical evaluation of models of text document similarity
Author: Lee, M.
Pincombe, B.
Welsh, M.
Citation: XXVII Annual Conference of the Cognitive Science Society / B. G. Bara, L. Barsalou and M. Bucciarelli (eds.), pp. 1254-1259
Publisher: Cognitive Science Society
Publisher Place: CD ROM
Issue Date: 2005
ISBN: 0976831813
Conference Name: Cognitive Science Society. Annual Conference (27th : 2005 : Stresa, Italy)
Editor: Bara, B.
Barsalou, L.
Bucciarelli, M.
Abstract: Modeling the semantic similarity between text documents presents a significant theoretical challenge for cognitive science, with ready-made applications in information handling and decision support systems dealing with text. While a number of candidate models exist, they have generally not been assessed in terms of their ability to emulate human judgments of similarity. To address this problem, we conducted an experiment that collected repeated similarity measures for each pair of documents in a small corpus of short news documents. An analysis of human performance showed inter-rater correlations of about 0.6. We then assessed the ability of existing models, using word-based, n-gram, and Latent Semantic Analysis (LSA) approaches, to reproduce these human judgments. The best-performing LSA model produced correlations of about 0.6, consistent with human performance, while the best-performing word-based and n-gram models achieved correlations closer to 0.5. Many of the remaining models showed almost no correlation with human performance. Based on our results, we provide some discussion of the key strengths and weaknesses of the models we examined.
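Note: The record does not include the authors' code. As a rough sketch of the kind of model comparison the abstract describes, the Python snippet below computes word-based and LSA cosine similarities for document pairs and correlates them with human ratings. It assumes scikit-learn and SciPy are available; the toy corpus, the rating values, and the choice of 2 latent dimensions are illustrative placeholders, not the paper's data or parameters.

# Illustrative sketch only, not the authors' implementation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "stock markets fell sharply after the announcement",
    "shares dropped as investors reacted to the news",
    "the local football team won the championship final",
]
# Hypothetical mean human similarity ratings for the document pairs
# (0,1), (0,2), (1,2), rescaled to [0, 1].
human_ratings = np.array([0.80, 0.15, 0.10])

# Word-based baseline: cosine similarity over TF-IDF vectors.
tfidf = TfidfVectorizer().fit_transform(docs)
word_sims = cosine_similarity(tfidf)

# LSA: project TF-IDF vectors onto k latent dimensions via truncated SVD,
# then take cosine similarity in the reduced space. k = 2 suits only this
# toy corpus; realistic corpora use far more dimensions.
lsa_space = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
lsa_sims = cosine_similarity(lsa_space)

# Collect the pairwise similarities and correlate with the human ratings.
pairs = [(0, 1), (0, 2), (1, 2)]
for name, sims in [("word-based", word_sims), ("LSA", lsa_sims)]:
    model_scores = np.array([sims[i, j] for i, j in pairs])
    r, _ = pearsonr(model_scores, human_ratings)
    print(f"{name}: r = {r:.2f}")

In the paper itself, similarity judgments were collected repeatedly for each pair in a corpus of short news documents, so a faithful replication would correlate model scores against the mean human rating for each document pair rather than single values as above.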
Rights: © the authors
Appears in Collections:Aurora harvest 2
Australian School of Petroleum publications
Environment Institute publications

Files in This Item:
File: hdl_28910.pdf
Description: Published version
Size: 687.02 kB
Format: Adobe PDF
