# Distributed representations of words and phrases and their compositionality

deep-learning nlp

This post was created as an assignment in Bang Liu’s IFT6289 course in winter 2022. The structure of the post follows the structure of the assignment: summarization followed by my own comments.

This paper describes several improvements to the original Skip-gram model:

1. Subsampling frequent words (reducing how often the model is exposed to very common words) speeds up training and improves the accuracy of the representations of rare words.
2. A new training objective the authors call “negative sampling” speeds up training and improves accuracy on frequent words.
3. Treating common phrases as single tokens with their own vectors makes the model more expressive.
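The subsampling in point 1 follows a simple rule from the paper: each occurrence of a word with relative frequency $$f(w)$$ is discarded with probability $$1 - \sqrt{t / f(w)}$$, where $$t$$ is a small threshold (around $$10^{-5}$$). A minimal sketch of that rule (the function name is mine, not from the paper):

```python
import math

def discard_prob(word_freq, t=1e-5):
    """Probability of *discarding* one occurrence of a word.

    word_freq: the word's relative frequency in the corpus, f(w).
    t: the subsampling threshold from the paper (~1e-5).
    Implements P(w) = 1 - sqrt(t / f(w)), clipped to [0, 1]:
    very frequent words are dropped most of the time, rare words
    (f(w) <= t) are always kept.
    """
    return max(0.0, 1.0 - math.sqrt(t / word_freq))
```

For example, a stopword like “the” with frequency around 5% is discarded well over 90% of the time, while a word rarer than the threshold is never discarded.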

The original Skip-gram model computed probabilities with a hierarchical softmax, which lets the model evaluate only $$O(\log_2(|V|))$$ terms when estimating the probability of a particular word, rather than $$O(|V|)$$. Negative sampling, on the other hand, deals directly with the vector representations themselves. Its loss maximizes the inner-product similarity (passed through a sigmoid, not a full softmax) between the input representation of the center word and the output representation of the observed neighboring word, while minimizing the similarity between the center word and a few randomly sampled noise vectors. The authors find that the required number of negative examples decreases as the dataset size increases.
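The negative-sampling objective for one (center, context) pair can be sketched as follows. This is a minimal NumPy illustration, not the paper's C implementation; the variable names (`v_in` for the input vector, `u_pos`/`u_negs` for output vectors) are my own:

```python
import numpy as np

def neg_sampling_loss(v_in, u_pos, u_negs):
    """Negative-sampling loss for one (center, context) pair.

    v_in   : input vector of the center word, shape (d,)
    u_pos  : output vector of the observed context word, shape (d,)
    u_negs : output vectors of k sampled noise words, shape (k, d)
    Returns the negated log-likelihood to minimize:
        -[ log sigma(u_pos . v_in) + sum_k log sigma(-u_neg_k . v_in) ]
    """
    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    pos_term = np.log(sigmoid(u_pos @ v_in))              # pull the true pair together
    neg_term = np.sum(np.log(sigmoid(-(u_negs @ v_in))))  # push noise words away
    return -(pos_term + neg_term)
```

When the center vector aligns with the true context vector and points away from the noise vectors, both log terms approach zero and the loss is small; the gradient of this loss is what updates the two sets of vectors during training.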