Information 2012, 3

this improvement is preserved across most semantic categories, the regression shows equal contribution by the constituent models, and the oracle shows the upside potential of these models is consistent with or perhaps slightly better than the previous best. All of these results support the conclusion that each constituent model’s semantic level, i.e., word-word, word-concept, and concept-concept, contributes positively to increasing the correlation with human semantic comparisons.


Our final analysis tests whether an artifact of the similarity scores might be responsible for this difference. It has been previously noted that models can perform better on WordSimilarity-353 when word pairs that lack a semantic representation are removed from the model [1]. Because of missing representations, these defective word pairs always receive a zero similarity score (the default score). By averaging the three constituent model scores, the W3C3 model removes this deficiency: at least one of the models is likely to have a representation or otherwise produce a non-zero score. For example, due to missing or extremely sparse semantic representations for WordSimilarity-353 words, WLM yielded 73 zero relatedness scores, ESA yielded 81, and COALS yielded 0. Thus one explanation for the improved correlation of the W3C3 model over the individual models is that the W3C3 model minimizes the effect of missing/sparse word representations.
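The averaging described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the score tuple and the pair name are made up, and a score of 0.0 stands in for a missing or extremely sparse representation.

```python
def w3c3_score(pair_scores):
    """Average the constituent model scores for one word pair."""
    return sum(pair_scores) / len(pair_scores)

# Hypothetical example: WLM lacks a representation for this pair
# (default score 0.0), but ESA and COALS cover it, so the averaged
# W3C3 score is still non-zero.
scores = {"tiger-cat": (0.0, 0.62, 0.71)}   # (WLM, ESA, COALS)
combined = {pair: w3c3_score(s) for pair, s in scores.items()}
```

Because a zero from one model is diluted rather than passed through, the combined score degrades gracefully when any single model lacks coverage.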


To explore the effect of zero relatedness scores on the performance of the individual and W3C3 models, we created a subset of WordSimilarity-353, consisting of 226 word pairs, for which none of the three models had a zero relatedness score. For the three individual models, higher correlations on this subset than on the whole set would support the missing/sparse representation explanation. Likewise, for the W3C3 model, a correlation comparable to those of the three individual models on this subset would support that explanation, since averaging could no longer be compensating for missing representations. Correlations for all models on this subset are presented in Table 7.
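The subset construction can be expressed as a simple filter over the per-pair scores. The word pairs and score values below are invented for illustration (the paper's actual subset has 226 pairs); the tuple order (WLM, ESA, COALS) is an assumption.

```python
# Toy per-pair scores; 0.0 marks a missing/sparse representation.
pairs = {
    ("tiger", "cat"):    (0.55, 0.62, 0.71),
    ("king", "cabbage"): (0.0,  0.10, 0.05),  # WLM lacks coverage
    ("money", "cash"):   (0.80, 0.0,  0.75),  # ESA lacks coverage
}

# Keep only pairs for which every constituent model is non-zero.
subset = {p: s for p, s in pairs.items() if all(v != 0.0 for v in s)}
```

Correlations with the human ratings would then be recomputed on `subset` alone, for the individual models and for W3C3.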


Table 7. Correlations with WordSimilarity-353 without missing word representations.


Model Correlation
W3C3 0.72
COALS 0.64
ESA 0.56
WLM 0.60


The pattern of correlations in Table 7 does not support the missing/sparse representation explanation. First, each model's correlation is lower than its counterpart on the whole data set given in Table 3, indicating that eliminating pairs with zero scores does not improve the performance of the individual models. Second, the W3C3 model in Table 7 exceeds every individual model by 0.08 or more, mirroring the pattern on the whole data set in Table 3, even though all correlations were lower on this subset. Thus, the W3C3 model yields an improvement in correlation regardless of whether word pairs with missing/sparse representations are removed.

  1. Agirre, E.; Alfonseca, E.; Hall, K.; Kravalova, J.; Paşca, M.; Soroa, A. A Study on Similarity and Relatedness Using Distributional and WordNet-Based Approaches. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL ’09); Association for Computational Linguistics: Stroudsburg, PA, USA, 2009; pp. 19–27.