The monitor logs the train loss after each epoch. However, it does not log the loss over the complete epoch, but only the loss of the last sampled mini-batch. As a result, the stochasticity introduced by shuffling the dataset can make the resulting plots quite noisy.
*Train loss as currently implemented (last mini-batch only)*

*Train loss of the same run, computed over the whole training set*

Most libraries that I know of solve this problem without any increase in runtime, in a way similar to changing `self.current_loss = loss.item()` to `self.current_loss += loss.item() / n_batches` and resetting this value to zero at the beginning of each epoch.
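
For concreteness, here is a minimal sketch of that accumulation approach, assuming a hypothetical `Monitor` class with per-batch and per-epoch hooks; the hook names and the `n_batches` parameter are illustrative, not from any specific library:

```python
import torch


class Monitor:
    """Tracks the epoch-averaged train loss instead of the last batch's loss."""

    def __init__(self, n_batches):
        self.n_batches = n_batches  # number of mini-batches per epoch
        self.current_loss = 0.0

    def on_epoch_start(self):
        # Reset the accumulator at the beginning of each epoch.
        self.current_loss = 0.0

    def on_batch_end(self, loss):
        # Accumulate the running mean instead of overwriting:
        # previously: self.current_loss = loss.item()
        self.current_loss += loss.item() / self.n_batches

    def on_epoch_end(self, epoch):
        # After the last batch, current_loss holds the mean loss over the epoch.
        print(f"epoch {epoch}: train loss = {self.current_loss:.4f}")


# Usage sketch: three mini-batches with losses 0.9, 0.7, 0.5 yield a mean of 0.7.
monitor = Monitor(n_batches=3)
monitor.on_epoch_start()
for batch_loss in [torch.tensor(0.9), torch.tensor(0.7), torch.tensor(0.5)]:
    monitor.on_batch_end(batch_loss)
monitor.on_epoch_end(epoch=0)
```

Since each mini-batch loss is already computed during training, accumulating it adds essentially no overhead, unlike a separate full pass over the train set at the end of each epoch.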