diff --git a/_posts/2020-10-02-nlp-translation.md b/_posts/2020-10-02-nlp-translation.md
index 78f7f16..2405544 100644
--- a/_posts/2020-10-02-nlp-translation.md
+++ b/_posts/2020-10-02-nlp-translation.md
@@ -142,16 +142,17 @@ Speedups are computed with respect to the 1 worker case, and are intended to ill
 The graphs below show the time speedups for the LSTM model and Transformer model (respectively).
-
+
 *GNMT Speedups*
- ![test]({{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4a_speedup.png)
-
+ ![test]({{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4a_speedups.png)
+
+
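+As a reminder, the speedup for a given number of workers is simply the ratio of total training times:
+
+$$ speedup(N) = \frac{T(1)}{T(N)} $$
+
+where T(N) denotes the total training time with N workers, so T(1) is the single-worker baseline.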
-
+
 *Transformer Speedups*
- ![test]({{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4b_speedup.png)
+ ![test]({{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4b_speedups.png)
 The left graph shows the absolute speed ups with respect to one worker, and the right one omits
@@ -181,45 +182,49 @@ The next figures show the total time spent in each step of training.
 - The top left graph in each figure shows the total training time `total = compute + communication`
-- Computation times are `compute = fwd + bwd + opt`
-- Communication times are precisely measured to take only into account communication of tensors between workers.
+- Computation times are `compute = forward + backward + optimization + loss computation + init + end`
+- Communication time covers only the `aggregation` step, and is measured precisely so that it accounts only for the communication of tensors between workers.
 As expected, we can see that compute steps take less time as we increase the number of nodes,
-while communication increasingly takes more and more time, following a sub-linear path. Interestingly, the Transformer model's communication times quickly reach a plateau
-after 4 workers, while GNMT's communication times keeps increasing. This effect is probably due to larger values in the shared tensors.
+while communication takes more and more time, following a sub-linear path. Looking at both graphs,
+we can see that `aggregation` times increase, but only slowly, and reach a plateau quite quickly: the time spent communicating with 8 and 16 workers barely differs.
+
+The other compute steps follow the inverse pattern: a fast decrease at first, followed by a slow plateau. The steps that benefit the most from distribution are
+backpropagation and the loss computation. This makes sense, as the per-machine batches get smaller.
-Time spent optimizing doesn’t seem to follow the same path, but increases are insignificant (~10 seconds),
-and are due to additional compute steps (averaging tensors, computations related to Mixed precision) when using distribution.
 ### Performance comparison
-Finally, the following figures show the loss evolution (left), Ratio of communication to total time (center), and a price index (right),
- computed as follows $$ index = \frac{price\_increase}{performance\_increase} $$
+Finally, the following figures show the share of time spent in each step of training. The *Aggregation* step corresponds to the aggregation of weights between the workers,
+and is the only step where communication happens.
 #### LSTM
-
- ![test]({{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4a_loss_ratio_prices.png)
- *Step times for GNMT*
+
+ *Step shares for GNMT*
+ ![test]({{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4a_step_shares.png)
-Communication takes up a huge part of training as we increase distribution: over 70% of the time is spent sending tensors for 16 workers.
+Communication takes up a huge part of training as we increase distribution: around 80% of the time is spent sending tensors with 16 workers!
 This could be made faster by using a more appropriate connectivity between the workers (currently it is at 10GB/s)
 that can reduce times by a factor of 10 or more.
-An interesting thing to observe is that the curve of cost index first decreases and has a valley before increasing again, which depicts the limits of distribution for this task.
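+For reference, the per-step timings behind these figures (forward, loss, backward, optimization, aggregation) can be collected with a small helper like the one sketched below. This is a minimal, illustrative PyTorch-style loop, not the actual benchmark code: the `timed` helper, the toy `nn.Linear` model and the random batches are stand-ins.
+
+```python
+import time
+from collections import defaultdict
+
+import torch
+import torch.nn as nn
+
+# Illustrative stand-ins for the real GNMT/Transformer model and data.
+model = nn.Linear(32, 32)
+optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
+criterion = nn.MSELoss()
+batches = [(torch.randn(16, 32), torch.randn(16, 32)) for _ in range(10)]
+
+step_times = defaultdict(float)  # accumulated seconds per training step
+
+def timed(name, fn, *args):
+    """Run fn, wait for any pending GPU work, and accumulate the elapsed time."""
+    if torch.cuda.is_available():
+        torch.cuda.synchronize()
+    start = time.perf_counter()
+    result = fn(*args)
+    if torch.cuda.is_available():
+        torch.cuda.synchronize()
+    step_times[name] += time.perf_counter() - start
+    return result
+
+for src, tgt in batches:
+    optimizer.zero_grad()
+    out = timed("forward", model, src)
+    loss = timed("loss", criterion, out, tgt)
+    timed("backward", loss.backward)
+    # In the distributed runs, the gradient aggregation (the only step that
+    # communicates between workers) would be timed here as "aggregation".
+    timed("optimization", optimizer.step)
+
+print(dict(step_times))
+```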
-The price to performance increase seems to be the best for 4 workers, but all indices are lower than 1, meaning the cost compromise is worth it for this task.
+
+We can clearly see the limits of the hardware used here: communication quickly becomes the bottleneck, as very large tensors are shared between an increasing number of workers.
+Here, the *All Reduce* aggregation of gradients is performed before the optimization step, which results in a large number of exchanged messages (a minimal sketch of this step is given at the end of the post). It would be interesting to see how the time spent communicating
+tensors could be reduced by a more advanced aggregation technique (e.g. sharing only with neighbors in a pre-defined topology).
 #### Transformer
-
- ![test]({{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4b_loss_ratio_prices.png)
- *Step times for Transformer*
+
+ *Step shares for Transformer*
+ ![test]({{ site.baseurl }}public/images/blog/2020-10-02-nlp-translation/task4b_step_shares.png)
-Compared to the LSTM model, the communication time ratio is slightly lower, but follows a similar path.
-For 8 workers, LSTM has a communication to total time of 57%, while Transformer 48%.
-For 16 workers, LSTM increases to 75% (31% increase), and Transformer 67% (39% increase).
-However, the price index has a different shape:
-the observed valley is missing, and the indices are decreasing as we add workers. This suggests a very good performance increase, with a lower price increase. The best configuration
-according to this index is with 8 workers, but the 16 worker case still has very impressive advantages.
+Compared to the LSTM model, the communication time ratio follows a similar path. However, since the Transformer does not use LSTM layers, its overall training time is lower.
+
+## Conclusion
+Both models solve the same task, on almost identical datasets and with a similar training algorithm, but their architectures are very different. It is therefore interesting to see how both react to distribution.
+The results show that both models benefit similarly from multiple workers, and that both are very quickly bottlenecked by the communication hardware. Here, nodes communicate over a regular high-speed
+network, which mimics a real "distributed" training environment, where machines could be in different locations. With direct or higher-performance communication between the nodes (e.g. NVLink, or Google's Virtual NIC),
+we would expect speedups close to the compute speedups, i.e. close to linear speedups for both models.
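+
+As mentioned above, here is a minimal sketch of the plain *All Reduce* gradient aggregation performed before the optimizer step. It uses generic `torch.distributed` calls; `average_gradients` is an illustrative helper, not the exact implementation used in these experiments.
+
+```python
+import torch
+import torch.distributed as dist
+
+def average_gradients(model: torch.nn.Module) -> None:
+    """All-Reduce each gradient tensor and divide by the number of workers,
+    so that every worker applies the same averaged update."""
+    world_size = dist.get_world_size()
+    for param in model.parameters():
+        if param.grad is not None:
+            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
+            param.grad /= world_size
+
+# Inside the training loop, on every worker (after dist.init_process_group):
+#   loss.backward()               # compute local gradients
+#   average_gradients(model)      # aggregation: the only communication step
+#   optimizer.step()              # identical update on all workers
+```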
----
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4a_loss_ratio_prices.png b/public/images/blog/2020-10-02-nlp-translation/task4a_loss_ratio_prices.png
deleted file mode 100644
index 218d5f3..0000000
Binary files a/public/images/blog/2020-10-02-nlp-translation/task4a_loss_ratio_prices.png and /dev/null differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4a_speedup.png b/public/images/blog/2020-10-02-nlp-translation/task4a_speedup.png
deleted file mode 100644
index 6688efe..0000000
Binary files a/public/images/blog/2020-10-02-nlp-translation/task4a_speedup.png and /dev/null differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4a_speedups.png b/public/images/blog/2020-10-02-nlp-translation/task4a_speedups.png
new file mode 100644
index 0000000..b9a07b6
Binary files /dev/null and b/public/images/blog/2020-10-02-nlp-translation/task4a_speedups.png differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4a_step_shares.png b/public/images/blog/2020-10-02-nlp-translation/task4a_step_shares.png
new file mode 100644
index 0000000..0a89a52
Binary files /dev/null and b/public/images/blog/2020-10-02-nlp-translation/task4a_step_shares.png differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4a_times.png b/public/images/blog/2020-10-02-nlp-translation/task4a_times.png
index c041104..c85bfaf 100644
Binary files a/public/images/blog/2020-10-02-nlp-translation/task4a_times.png and b/public/images/blog/2020-10-02-nlp-translation/task4a_times.png differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4b_loss_ratio_prices.png b/public/images/blog/2020-10-02-nlp-translation/task4b_loss_ratio_prices.png
deleted file mode 100644
index d718c14..0000000
Binary files a/public/images/blog/2020-10-02-nlp-translation/task4b_loss_ratio_prices.png and /dev/null differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4b_speedup.png b/public/images/blog/2020-10-02-nlp-translation/task4b_speedup.png
deleted file mode 100644
index bb56391..0000000
Binary files a/public/images/blog/2020-10-02-nlp-translation/task4b_speedup.png and /dev/null differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4b_speedups.png b/public/images/blog/2020-10-02-nlp-translation/task4b_speedups.png
new file mode 100644
index 0000000..dfd5176
Binary files /dev/null and b/public/images/blog/2020-10-02-nlp-translation/task4b_speedups.png differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4b_step_shares.png b/public/images/blog/2020-10-02-nlp-translation/task4b_step_shares.png
new file mode 100644
index 0000000..41b5743
Binary files /dev/null and b/public/images/blog/2020-10-02-nlp-translation/task4b_step_shares.png differ
diff --git a/public/images/blog/2020-10-02-nlp-translation/task4b_times.png b/public/images/blog/2020-10-02-nlp-translation/task4b_times.png
index 0645c97..dbc66cc 100644
Binary files a/public/images/blog/2020-10-02-nlp-translation/task4b_times.png and b/public/images/blog/2020-10-02-nlp-translation/task4b_times.png differ