Network Scheduler Based on Reinforcement Learning for Minimizing the Maximum End-to-End Latency
Vol. 47, No. 9, pp. 1330-1340, Sep. 2022
DOI: 10.7840/kics.2022.47.9.1330
Keywords: min-max criterion, reinforcement learning, double deep Q-learning, prioritized experience replay, end-to-end latency, fairness
Cite this article
[IEEE Style]
J. Kwon, J. Ryu, J. Joung, "Network Scheduler Based on Reinforcement Learning for Minimizing the Maximum End-to-End Latency," The Journal of Korean Institute of Communications and Information Sciences, vol. 47, no. 9, pp. 1330-1340, 2022. DOI: 10.7840/kics.2022.47.9.1330.
[ACM Style]
Juhyeok Kwon, Jihye Ryu, and Jinoo Joung. 2022. Network Scheduler Based on Reinforcement Learning for Minimizing the Maximum End-to-End Latency. The Journal of Korean Institute of Communications and Information Sciences, 47, 9, (2022), 1330-1340. DOI: 10.7840/kics.2022.47.9.1330.
[KICS Style]
Juhyeok Kwon, Jihye Ryu, Jinoo Joung, "Network Scheduler Based on Reinforcement Learning for Minimizing the Maximum End-to-End Latency," The Journal of Korean Institute of Communications and Information Sciences, vol. 47, no. 9, pp. 1330-1340, Sep. 2022. (https://doi.org/10.7840/kics.2022.47.9.1330)