I might be wrong, but I think the seq2seq model in the first notebook does not handle variable-length sequences properly (this mistake probably carries over to the other notebooks as well). Specifically, in the encoder we use the rnn to compute hidden, cell as the summary "context" of the input, which then initializes the hidden, cell states of the decoder. For a mini-batch, if T is the length of the longest sequence in the mini-batch, then we are running the LSTM in the encoder for T steps for every example when computing hidden, cell. However, the LSTM should only be run for T1 steps for example 1, T2 steps for example 2, and so on (where T1 is the length of the 1st sequence, etc.), so that the padding tokens don't contaminate the context.
I think as a simple fix you can use the pack_padded_sequence function in the forward method of the encoder (see below), which I believe computes the hidden/cell states in the fashion that I described. The data loader will also have to provide a tensor of sequence lengths for each example in the batch (see below). Some of the other functions (e.g. the training and evaluation functions) and classes (the seq2seq class) will need to be slightly modified as well to accommodate taking in de_len as an input. I've implemented this and it trains fine for me.
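Here's a minimal sketch of what I mean. The hyperparameter names and the `Encoder` class follow the notebook's structure as I remember it, but the batch keys `de_ids`/`en_ids`, the length tensor `de_len`, and the `get_collate_fn` helper are just my own naming and may not match the notebook exactly:

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_sequence


class Encoder(nn.Module):
    def __init__(self, input_dim, emb_dim, hid_dim, n_layers, dropout):
        super().__init__()
        self.embedding = nn.Embedding(input_dim, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hid_dim, n_layers, dropout=dropout)
        self.dropout = nn.Dropout(dropout)

    def forward(self, src, src_len):
        # src = [max_src_len, batch_size], src_len = [batch_size]
        embedded = self.dropout(self.embedding(src))
        # Pack so the LSTM only runs T_i steps for example i; hidden/cell
        # then hold the state at each sequence's true last token rather
        # than the state after also processing the padding.
        packed = pack_padded_sequence(embedded, src_len.cpu(),
                                      enforce_sorted=False)
        _, (hidden, cell) = self.rnn(packed)
        # hidden, cell = [n_layers, batch_size, hid_dim]
        return hidden, cell


def get_collate_fn(pad_index):
    def collate_fn(batch):
        batch_de_ids = [example["de_ids"] for example in batch]
        batch_en_ids = [example["en_ids"] for example in batch]
        # Record the true lengths *before* padding.
        batch_de_len = torch.tensor([len(ids) for ids in batch_de_ids],
                                    dtype=torch.long)
        batch_de_ids = pad_sequence(batch_de_ids, padding_value=pad_index)
        batch_en_ids = pad_sequence(batch_en_ids, padding_value=pad_index)
        return {"de_ids": batch_de_ids,
                "en_ids": batch_en_ids,
                "de_len": batch_de_len}
    return collate_fn
```

The seq2seq class's forward and the training/evaluation loops then just need to pass the length tensor (e.g. `batch["de_len"]`) through to the encoder alongside the source tensor.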