Co-occurrence retrieval: a flexible framework for lexical distributional similarity. (English) Zbl 1234.68436
Summary: Techniques that exploit knowledge of distributional similarity between words have been proposed in many areas of natural language processing. For example, in language modeling, the sparse data problem can be alleviated by estimating the probabilities of unseen co-occurrences of events from the probabilities of seen co-occurrences of similar events. In other applications, distributional similarity is taken to be an approximation to semantic similarity. However, due to the wide range of potential applications and the lack of a strict definition of the concept of distributional similarity, many methods of calculating distributional similarity have been proposed or adopted. In this work, a flexible, parameterized framework for calculating distributional similarity is proposed. Within this framework, the problem of finding distributionally similar words is cast as one of co-occurrence retrieval (CR) for which precision and recall can be measured by analogy with the way they are measured in document retrieval. As will be shown, a number of popular existing measures of distributional similarity are simulated with parameter settings within the CR framework. In this article, the CR framework is then used to systematically investigate three fundamental questions concerning distributional similarity. First, is the relationship of lexical similarity necessarily symmetric, or are there advantages to be gained from considering it as an asymmetric relationship? Second, are some co-occurrences inherently more salient than others in the calculation of distributional similarity? Third, is it necessary to consider the difference in the extent to which each word occurs in each co-occurrence type? Two application-based tasks are used for evaluation: automatic thesaurus generation and pseudo-disambiguation. 
It is possible to achieve significantly better results on both these tasks by varying the parameters within the CR framework rather than using other existing distributional similarity measures; it will also be shown that any single unparameterized measure is unlikely to be able to do better on both tasks. This is due to an inherent asymmetry in lexical substitutability and therefore also in lexical distributional similarity.
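The precision/recall analogy at the heart of the CR framework can be illustrated with a minimal sketch. This is an unweighted (type-based) version under assumed toy data, not the paper's full parameterized framework: the target word's observed co-occurrence types play the role of "relevant" items, the candidate neighbour's play the role of "retrieved" items, and precision and recall are computed as in document retrieval.

```python
# Minimal sketch of type-based co-occurrence retrieval (CR).
# F(w) is taken to be the set of co-occurrence types observed with word w;
# the sets below are invented toy data, not from the paper.

def cr_precision_recall(target, candidate):
    """Treat target's co-occurrence types as 'relevant' items and the
    candidate's as 'retrieved' items, by analogy with document retrieval."""
    shared = target & candidate
    precision = len(shared) / len(candidate) if candidate else 0.0
    recall = len(shared) / len(target) if target else 0.0
    return precision, recall

# Hypothetical data: verbs each noun has been observed as the object of.
F_dog = {"feed", "walk", "stroke", "own"}
F_cat = {"feed", "stroke", "own"}

p, r = cr_precision_recall(F_dog, F_cat)
print(p, r)  # 1.0 0.75
```

Note that swapping the arguments swaps precision and recall, which makes the inherent asymmetry discussed in the summary concrete: "cat" retrieves all of its co-occurrences from "dog"'s set, but not vice versa.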
MSC:
68T50 Natural language processing
Keywords:
natural language processing; knowledge of distributional similarity; co-occurrence retrieval (CR)