In Recurrent Neural Networks, feeding a sequence of inputs and producing a sequence of outputs is called Sequence-to-Sequence. For example, feed the onion prices for the last 30 days and output, at each time step, the predicted price for the following day (so the last output is tomorrow's price).

Sequence-to-Sequence networks are very useful for predicting time series, natural language processing, and more.
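Here is a minimal sketch of such a network, assuming TensorFlow/Keras (the post does not name a framework); the layer sizes and the 30-day window are illustrative placeholders, not tuned values:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sequence-to-Sequence: return_sequences=True makes the RNN emit an
# output at every time step instead of only the last one.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(30, 1)),            # 30 days of prices, 1 feature per day
    layers.SimpleRNN(20, return_sequences=True),
    layers.TimeDistributed(layers.Dense(1)),  # one prediction per time step
])
model.compile(loss="mse", optimizer="adam")
```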

Feed a sequence of inputs and keep only the last output, ignoring all the outputs in between; this is called Sequence-to-Vector. For example, feed an IMDB review of a movie and predict whether the movie is a horror or a drama.
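A sketch of a Sequence-to-Vector classifier, again assuming Keras; the vocabulary size, embedding width, and binary horror-vs-drama output are made-up placeholders:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Sequence-to-Vector: with return_sequences=False (the default), the RNN
# keeps only its final output, which summarizes the whole review.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(None,)),            # variable-length sequence of word ids
    layers.Embedding(input_dim=10_000, output_dim=32),
    layers.SimpleRNN(32),                     # only the last output survives
    layers.Dense(1, activation="sigmoid"),    # e.g. horror vs. drama
])
model.compile(loss="binary_crossentropy", optimizer="adam")
```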

Feed the same input vector at each time step and produce a sequence of outputs; this is called Vector-to-Sequence. A classic example is image captioning: a Convolutional Neural Network (CNN) converts an image into a feature vector, and the Vector-to-Sequence network turns that vector into a caption describing the image.
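In Keras, one way to feed the same vector at every time step is the RepeatVector layer; the feature size, caption length, and vocabulary size below are hypothetical:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Vector-to-Sequence: RepeatVector presents the same input vector to the
# RNN at every time step, and the RNN unrolls it into a sequence.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(2048,)),            # e.g. image features from a CNN
    layers.RepeatVector(10),                  # repeat the vector for 10 time steps
    layers.SimpleRNN(64, return_sequences=True),
    layers.TimeDistributed(layers.Dense(5000, activation="softmax")),  # word probabilities
])
```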

A Sequence-to-Vector network is called an encoder, and a Vector-to-Sequence network is called a decoder.

Google Translate uses the Seq2Seq model: an encoder reads the sentence in the source language, and a decoder writes it out in the target language.
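A minimal encoder-decoder sketch in Keras, assuming teacher forcing during training (the decoder sees the previous target words); the vocabulary and layer sizes are placeholders:

```python
import tensorflow as tf
from tensorflow.keras import layers

# Encoder: a Sequence-to-Vector network that compresses the source
# sentence into a fixed-size state vector.
encoder_inputs = tf.keras.Input(shape=(None,))   # source-language word ids
enc = layers.Embedding(10_000, 64)(encoder_inputs)
_, state = layers.GRU(64, return_state=True)(enc)

# Decoder: a Vector-to-Sequence network that starts from that state and
# emits the target sentence one word at a time.
decoder_inputs = tf.keras.Input(shape=(None,))   # target-language word ids
dec = layers.Embedding(10_000, 64)(decoder_inputs)
dec = layers.GRU(64, return_sequences=True)(dec, initial_state=state)
outputs = layers.Dense(10_000, activation="softmax")(dec)

model = tf.keras.Model([encoder_inputs, decoder_inputs], outputs)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
```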

Most Chatbot applications also use the Seq2Seq model, and we will see more examples in future blogs.