A recurrent neural network (RNN) is a type of neural network designed for sequence data or time-series data. The major difference between an RNN and other networks is its ability to store information derived from previous outputs.
The simplest example of an RNN consists of a single neuron. The neuron receives an input together with its own previous output, maintains a hidden state, and produces an output. The hidden state represents the neuron's memory and is computed from the previous time steps.
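One step of such a neuron can be sketched as follows; this is a minimal illustration with hypothetical weights (`w_x`, `w_h`, `b`) and a `tanh` activation, not a definitive implementation:

```python
import numpy as np

def rnn_step(x, h_prev, w_x, w_h, b):
    """One time step of a single-neuron RNN: the new hidden state
    combines the current input x with the previous state h_prev."""
    h = np.tanh(w_x * x + w_h * h_prev + b)
    return h  # the hidden state also serves as the neuron's output

# One step from an empty memory (illustrative values):
h = rnn_step(x=1.0, h_prev=0.0, w_x=0.8, w_h=0.5, b=0.0)
print(h)
```

Because `h` is fed back in at the next step, information from earlier inputs persists in the neuron's memory.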
Represented through time, this simple RNN can be unrolled. Along the horizontal axis you can follow the state of the neuron per time step: at each time step, the neuron receives the current input as well as its own output from the previous time step.
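The unrolled view corresponds directly to a loop over time steps. The sketch below (again with hypothetical scalar weights) collects the neuron's output at every step of a short input sequence:

```python
import numpy as np

def unroll(xs, w_x=0.8, w_h=0.5, b=0.0):
    """Run the single-neuron RNN over a whole sequence,
    mirroring the unrolled diagram step by step."""
    h = 0.0            # hidden state before the first step
    outputs = []
    for x in xs:       # one loop iteration per time step
        h = np.tanh(w_x * x + w_h * h + b)
        outputs.append(h)
    return outputs

outs = unroll([1.0, 0.5, -0.3])
print(outs)
```

Each entry of `outs` is the state of the same neuron at a different time step, which is exactly what the horizontal axis of the unrolled visualization shows.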
An RNN is trained with backpropagation through time (BPTT). First, a feed-forward pass over the unrolled network, as in the visualization above, computes the final outputs. Afterwards, the gradients are propagated backwards through the network, and the computed gradients are then used to update the model parameters.
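Both passes can be sketched for the scalar single-neuron case. This is an illustrative BPTT implementation under simplifying assumptions (no bias, a squared-error loss on the final output only, made-up weights and data):

```python
import numpy as np

def bptt(xs, target, w_x, w_h):
    """Forward pass through the unrolled RNN, then backpropagation
    through time to get the gradients of the loss w.r.t. the weights."""
    # Forward pass: keep every hidden state for the backward pass.
    hs = [0.0]
    for x in xs:
        hs.append(np.tanh(w_x * x + w_h * hs[-1]))
    loss = 0.5 * (hs[-1] - target) ** 2

    # Backward pass: walk the unrolled time steps in reverse.
    dw_x, dw_h = 0.0, 0.0
    dh = hs[-1] - target                  # dL/dh at the final step
    for t in reversed(range(len(xs))):
        da = dh * (1.0 - hs[t + 1] ** 2)  # back through the tanh
        dw_x += da * xs[t]                # accumulate input-weight grad
        dw_h += da * hs[t]                # accumulate recurrent-weight grad
        dh = da * w_h                     # pass gradient to step t-1
    return loss, dw_x, dw_h

loss, gx, gh = bptt([1.0, 0.5, -0.3], target=0.2, w_x=0.8, w_h=0.5)
w_x = 0.8 - 0.1 * gx   # one gradient-descent update of the parameters
w_h = 0.5 - 0.1 * gh
```

Note that the same two weights are reused at every unrolled step, which is why their gradients are summed over all time steps before the update.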