Published 2019-09-26 · Author: 风铃 · Board: 前端 (Front-end)
2.1 Introduction to Word Embeddings
2.1.1 Word Representation
Featurized representation: word embedding
use an n-dimensional vector to represent one word
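A minimal sketch of the idea, using made-up 4-dimensional feature vectors (real embeddings are typically 50–1000 dimensional and learned, not hand-set):

```python
import numpy as np

# Toy featurized representation: each word -> one n-dimensional vector.
# The four "features" (gender, royalty, age, is-food) and their values are
# invented for illustration only.
embedding = {
    "man":   np.array([-1.00, 0.01, 0.03, 0.09]),
    "woman": np.array([ 1.00, 0.02, 0.02, 0.01]),
    "king":  np.array([-0.95, 0.93, 0.70, 0.02]),
    "queen": np.array([ 0.97, 0.95, 0.69, 0.01]),
    "apple": np.array([ 0.00, -0.01, 0.03, 0.95]),
}

print(embedding["queen"])   # one 4-dimensional vector representing "queen"
```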
2.1.2 Using word embeddings
Transfer learning and word embeddings
2.1.3 Properties of word embeddings
Cosine similarity
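A small NumPy sketch of cosine similarity between embedding vectors (toy vectors, chosen only so that the comments hold):

```python
import numpy as np

def cosine_similarity(u, v):
    # cos(theta) = (u . v) / (||u|| * ||v||); near 1 means the vectors point the same way
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy vectors for illustration only
e_king  = np.array([0.90, 0.80, 0.10])
e_queen = np.array([0.85, 0.75, 0.20])
e_apple = np.array([0.05, 0.10, 0.90])

print(cosine_similarity(e_king, e_queen))   # ~0.99: similar words
print(cosine_similarity(e_king, e_apple))   # ~0.20: unrelated words
```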
2.1.4 Embedding matrix
2.2 Learning Word Embeddings: Word2Vec & GloVe
2.2.1 Learning word embeddings
2.2.2 Word2Vec
2.2.3 GloVe word vectors
2.3 Applications using Word Embeddings
2.3.1 Sentiment Classification
RNN for sentiment classification
2.3.2 Debiasing word embeddings
bias problem
Q&A
Suppose you learn a word embedding for a vocabulary of 10000 words. Then the embedding vectors
should be 10000 dimensional, so as to capture the full range of variation and meaning in those
words.
True
False
What is t-SNE?
A linear transformation that allows us to solve analogies on word vectors
A non-linear dimensionality reduction technique
A supervised learning algorithm for learning word embeddings
An open-source sequence modeling library
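t-SNE is a non-linear dimensionality reduction technique often used to visualize word embeddings in 2D. A quick sketch with scikit-learn (assumed available; the random matrix stands in for real embeddings):

```python
import numpy as np
from sklearn.manifold import TSNE

E = np.random.randn(1000, 300)   # stand-in for 1000 word vectors of dimension 300

# Non-linear projection down to 2D for plotting. Because the map is non-linear,
# vector arithmetic (analogies) generally does not survive the projection.
coords = TSNE(n_components=2, perplexity=30.0, init="pca").fit_transform(E)
print(coords.shape)              # (1000, 2)
```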
Suppose you download a pre-trained word embedding which has been trained on a huge corpus of text.
You then use this word embedding to train an RNN for a language task of recognizing if someone is
happy from a short snippet of text, using a small training set.
| x (input text) | y (happy?) |
|---|---|
| I'm feeling wonderful today! | 1 |
| I'm bummed my cat is ill. | 0 |
| Really enjoying this! | 1 |
Then even if the word “ecstatic” does not appear in your small training set, your RNN might
reasonably be expected to recognize “I’m ecstatic” as deserving a label y = 1.
True
False
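One way this could look in code, sketched here with PyTorch (my choice of framework, not specified in the question): the pre-trained embedding matrix is frozen and only the small RNN head is trained on the tiny labeled set. Because “ecstatic” sits near words like “wonderful” in the embedding space, the model can generalize to it even though it never appears in the training set.

```python
import torch
import torch.nn as nn

# Stand-in for a downloaded pre-trained embedding matrix (vocab_size x embed_dim)
pretrained = torch.randn(10000, 300)

class SentimentRNN(nn.Module):
    def __init__(self, weights, hidden_dim=64):
        super().__init__()
        # Freeze the pre-trained embeddings; only the RNN and output layer are trained
        self.embed = nn.Embedding.from_pretrained(weights, freeze=True)
        self.rnn = nn.GRU(weights.shape[1], hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, 1)

    def forward(self, token_ids):               # token_ids: (batch, seq_len)
        x = self.embed(token_ids)               # (batch, seq_len, embed_dim)
        _, h = self.rnn(x)                      # h: (1, batch, hidden_dim)
        return torch.sigmoid(self.out(h[-1]))   # P(y = 1 | text)

model = SentimentRNN(pretrained)
print(model(torch.randint(0, 10000, (2, 6))))   # two probabilities in [0, 1]
```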
Which of these equations do you think should hold for a good word embedding? (Check all that apply)
$e_{boy} - e_{girl} \approx e_{brother} - e_{sister}$
$e_{boy} - e_{girl} \approx e_{sister} - e_{brother}$
$e_{boy} - e_{brother} \approx e_{girl} - e_{sister}$
$e_{boy} - e_{brother} \approx e_{sister} - e_{girl}$
Recall the logic of analogies: the order of the words matters (see the toy check below).
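A toy check of that logic (vectors invented so that only the first dimension encodes gender):

```python
import numpy as np

def cos(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy vectors: first dimension ~ gender, the rest shared
e = {
    "boy":     np.array([-1.0, 0.2, 0.5]),
    "girl":    np.array([ 1.0, 0.2, 0.5]),
    "brother": np.array([-1.0, 0.6, 0.4]),
    "sister":  np.array([ 1.0, 0.6, 0.4]),
}

# e_boy - e_girl and e_brother - e_sister both point along the gender direction,
# so that pairing holds; flipping the order flips the sign.
print(cos(e["boy"] - e["girl"], e["brother"] - e["sister"]))   # +1.0
print(cos(e["boy"] - e["girl"], e["sister"] - e["brother"]))   # -1.0
```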
Let $E$ be an embedding matrix, and let $e_{1234}$ be a one-hot vector corresponding to word 1234. Then to get the embedding of word 1234, why don’t we call $E * e_{1234}$ in Python?
It is computationally wasteful.
The correct formula is $E^T * e_{1234}$.
This doesn’t handle unknown words (<UNK>).
None of the above: Calling the Python snippet as described above is fine.
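A sketch of why the matrix-vector product is wasteful compared with a direct lookup, following the course convention that $E$ is (embedding dimension × vocabulary size) and the one-hot vector selects a column (random matrix as a stand-in):

```python
import numpy as np

n_dim, vocab_size = 300, 10000
E = np.random.randn(n_dim, vocab_size)       # random stand-in for a learned embedding matrix

o_1234 = np.zeros(vocab_size)
o_1234[1234] = 1.0                           # one-hot vector for word 1234

# E @ o_1234 multiplies and adds 300 * 10000 numbers, almost all of them zero
via_matmul = E @ o_1234

# In practice we just index the corresponding column, which copies 300 numbers
via_lookup = E[:, 1234]

print(np.allclose(via_matmul, via_lookup))   # True -- same result, far less work
```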
When learning word embeddings, we create an artificial task of estimating $P(\text{target} \mid \text{context})$. It is okay if we do poorly on this artificial prediction task; the more important by-product of this task is that we learn a useful set of word embeddings.
True
False
In the word2vec algorithm, you estimate $P(t \mid c)$, where $t$ is the target word and $c$ is a context word. How are $t$ and $c$ chosen from the training set? Pick the best answer.
$c$ is the one word that comes immediately before $t$.
$c$ is a sequence of several words immediately before $t$.
$c$ and $t$ are chosen to be nearby words.
$c$ is the sequence of all the words in the sentence before $t$.
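A sketch of how such (context, target) pairs might be sampled under the “nearby words” scheme (the helper name and window size are my own):

```python
import random

def sample_pairs(tokens, window=5):
    """For each context word c, pick one target t uniformly from the words within +/- window of c."""
    pairs = []
    for i, c in enumerate(tokens):
        offsets = [o for o in range(-window, window + 1)
                   if o != 0 and 0 <= i + o < len(tokens)]
        t = tokens[i + random.choice(offsets)]
        pairs.append((c, t))
    return pairs

print(sample_pairs("the quick brown fox jumps over the lazy dog".split(), window=2))
```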
Suppose you have a 10000 word vocabulary, and are learning 500-dimensional word embeddings. The
word2vec model uses the following softmax function:
$$P(t \mid c) = \frac{e^{\theta_t^T e_c}}{\sum_{t'=1}^{10000} e^{\theta_{t'}^T e_c}}$$
Which of these statements are correct? Check all that apply.
$\theta_t$ and $e_c$ are both 500 dimensional vectors.
$\theta_t$ and $e_c$ are both 10000 dimensional vectors.
$\theta_t$ and $e_c$ are both trained with an optimization algorithm such as Adam or gradient descent.
After training, we should expect $\theta_t$ to be very close to $e_c$ when $t$ and $c$ are the same word.
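A NumPy sketch of this softmax (random parameters standing in for trained ones); note that both $\theta_t$ and $e_c$ are 500-dimensional here:

```python
import numpy as np

vocab_size, n_dim = 10000, 500
Theta = 0.01 * np.random.randn(vocab_size, n_dim)   # one theta_t per target word, 500-dimensional
E     = 0.01 * np.random.randn(vocab_size, n_dim)   # one e_c per context word, 500-dimensional

def p_t_given_c(c_index):
    logits = Theta @ E[c_index]   # theta_t . e_c for every candidate target t
    logits -= logits.max()        # subtract the max for numerical stability
    exp = np.exp(logits)
    return exp / exp.sum()        # softmax over the 10000-word vocabulary

probs = p_t_given_c(42)
print(probs.shape, probs.sum())   # (10000,) 1.0
```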
Suppose you have a 10000 word vocabulary, and are learning 500-dimensional word embeddings. The GloVe
model minimizes this objective:
$$\min \sum_{i=1}^{10{,}000} \sum_{j=1}^{10{,}000} f(X_{ij}) \left( \theta_i^T e_j + b_i + b'_j - \log X_{ij} \right)^2$$
Which of these statements are correct? Check all that apply.
$\theta_i$ and $e_j$ should be initialized to 0 at the beginning of training.
$\theta_i$ and $e_j$ should be initialized randomly at the beginning of training.
$X_{ij}$ is the number of times word $i$ appears in the context of word $j$.
The weighting function $f(\cdot)$ must satisfy $f(0) = 0$.
The weighting function helps prevent learning only from extremely common word pairs; it must also satisfy $f(0) = 0$ so that pairs that never co-occur contribute nothing and $\log X_{ij}$ never has to be evaluated at zero.
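A scaled-down NumPy sketch of the objective (1,000 words and 50 dimensions instead of 10,000 and 500, with a random stand-in for the co-occurrence counts $X_{ij}$ and a GloVe-style weighting function):

```python
import numpy as np

vocab_size, n_dim = 1000, 50
rng = np.random.default_rng(0)

# Parameters are initialized randomly (not to zero, which would leave nothing to break symmetry)
theta   = rng.normal(scale=0.01, size=(vocab_size, n_dim))
e       = rng.normal(scale=0.01, size=(vocab_size, n_dim))
b       = np.zeros(vocab_size)
b_prime = np.zeros(vocab_size)

# X[i, j] = number of times word i appears in the context of word j (random stand-in here)
X = rng.poisson(0.01, size=(vocab_size, vocab_size)).astype(float)

def f(x, x_max=100.0, alpha=0.75):
    # Weighting function: f(0) = 0, so pairs that never co-occur contribute nothing
    # (log X_ij is never evaluated at 0), and extremely common pairs are damped.
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_objective():
    total = 0.0
    for i, j in zip(*np.nonzero(X)):   # only pairs with X_ij > 0 contribute
        diff = theta[i] @ e[j] + b[i] + b_prime[j] - np.log(X[i, j])
        total += f(X[i, j]) * diff ** 2
    return total

print(glove_objective())
```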
You have trained word embeddings using a text dataset of $m_1$ words. You are considering using these word embeddings for a language task, for which you have a separate labeled dataset of $m_2$ words. Keeping in mind that using word embeddings is a form of transfer learning, under which of these circumstances would you expect the word embeddings to be helpful?
$m_1 \gg m_2$
$m_1 \ll m_2$