A Survey on Parallel Deep Learning 


Vol. 46, No. 10, pp. 1604-1617, Oct. 2021
DOI: 10.7840/kics.2021.46.10.1604


Abstract

Deep learning has been widely used in various fields, driving drastic development of state-of-the-art technologies such as natural language processing, speech recognition, image classification, feature extraction, and machine translation. As massive data and intricate tasks necessitate enlarged neural networks, the number of layers and parameters in neural networks has become tremendous, making training highly compute-intensive. To make large-scale deep neural networks (DNNs) scalable over resource-constrained devices and to accelerate learning, several parallelization approaches have been investigated under the name of federated learning. In this survey, we introduce four parallelism methods: data parallelism, model parallelism, hybrid parallelism, and pipeline parallelism.
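As a concrete illustration of the first of these four methods, the following minimal sketch simulates data parallelism on a toy linear model. It is not taken from the survey: the worker count, model, and variable names are hypothetical, and the all-reduce step is simulated in a single process rather than performed over a real communication backend.

```python
import numpy as np

# Toy data-parallel SGD: every worker holds an identical model replica
# and a shard of the global batch; gradients are averaged (an "all-reduce")
# before one shared weight update. All names here are illustrative.

rng = np.random.default_rng(0)
num_workers = 4
features = 8

# Synthetic regression data and a shared linear model y = X @ w.
X = rng.normal(size=(256, features))
true_w = rng.normal(size=features)
y = X @ true_w + 0.01 * rng.normal(size=256)
w = np.zeros(features)

lr = 0.1
for step in range(100):
    # Scatter: split the global batch into one equal shard per worker.
    shards = np.array_split(np.arange(len(X)), num_workers)
    local_grads = []
    for idx in shards:
        Xi, yi = X[idx], y[idx]
        err = Xi @ w - yi
        # Local gradient of the mean-squared error on this worker's shard.
        local_grads.append(2.0 * Xi.T @ err / len(idx))
    # Simulated all-reduce: average the local gradients, then apply the
    # same update to every replica so the replicas stay synchronized.
    w -= lr * np.mean(local_grads, axis=0)

print("weight error:", np.linalg.norm(w - true_w))
```

Because the shards are equal-sized, the averaged gradient equals the gradient over the full batch, so this loop matches single-machine SGD step for step; in a real system, frameworks replace the averaging line with collective communication across devices.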


Cite this article

[IEEE Style]

J. Yoon, J. Lee, N. Han, H. Lee, "A Survey on Parallel Deep Learning," The Journal of Korean Institute of Communications and Information Sciences, vol. 46, no. 10, pp. 1604-1617, 2021. DOI: 10.7840/kics.2021.46.10.1604.

[ACM Style]

JinYi Yoon, JiHo Lee, Nayoung Han, and HyungJune Lee. 2021. A Survey on Parallel Deep Learning. The Journal of Korean Institute of Communications and Information Sciences, 46, 10, (2021), 1604-1617. DOI: 10.7840/kics.2021.46.10.1604.

[KICS Style]

JinYi Yoon, JiHo Lee, Nayoung Han, HyungJune Lee, "A Survey on Parallel Deep Learning," The Journal of Korean Institute of Communications and Information Sciences, vol. 46, no. 10, pp. 1604-1617, 10. 2021. (https://doi.org/10.7840/kics.2021.46.10.1604)