Network Scheduler Based on Reinforcement Learning for Minimizing the Maximum End-to-End Latency 


Vol. 47,  No. 9, pp. 1330-1340, Sep.  2022
10.7840/kics.2022.47.9.1330


  Abstract

In this paper, a reinforcement learning (RL)-based scheduler that minimizes the maximum network end-to-end latency is implemented in both a single-agent environment and a multi-agent environment. The RL model uses a double deep Q-network (DDQN) with prioritized experience replay (PER). Since agents cannot observe end-to-end latencies in the multi-agent environment, the state and reward were computed from estimated end-to-end latencies. Four network topologies were implemented and simulated to compare the RL-based scheduler against FIFO, round robin (RR), and a simple heuristic algorithm (HA). In scenarios with fixed packet generation, the RL-based scheduler minimized the maximum end-to-end latency in all topologies. The FIFO and RR schedulers could not minimize the maximum end-to-end latency in any topology, and the HA failed to minimize it in one topology. In scenarios with random flow generation, the RL-based scheduler outperformed FIFO and RR, but performed the same as or worse than the HA, depending on the topology.
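The abstract names the two RL components used, DDQN and PER. As a minimal illustration of how these are commonly combined (not the paper's actual implementation; all function names and values here are assumed for the sketch), the double-Q target uses the online network to select the next action and the target network to evaluate it, and the PER priority is derived from the absolute TD error:

```python
import numpy as np

def ddqn_target(online_q, target_q, reward, next_state, gamma=0.99):
    """Double DQN target: online net picks the action, target net scores it."""
    a_star = int(np.argmax(online_q(next_state)))          # action selection
    return reward + gamma * float(target_q(next_state)[a_star])  # evaluation

def per_priority(td_error, alpha=0.6, eps=1e-6):
    """PER priority: |TD error|^alpha, with eps so no transition has zero chance."""
    return (abs(td_error) + eps) ** alpha
```

Transitions with larger `per_priority` values are sampled from the replay buffer more often, which focuses training on experiences the network currently predicts poorly.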


  Cite this article

[IEEE Style]

J. Kwon, J. Ryu, J. Joung, "Network Scheduler Based on Reinforcement Learning for Minimizing the Maximum End-to-End Latency," The Journal of Korean Institute of Communications and Information Sciences, vol. 47, no. 9, pp. 1330-1340, 2022. DOI: 10.7840/kics.2022.47.9.1330.

[ACM Style]

Juhyeok Kwon, Jihye Ryu, and Jinoo Joung. 2022. Network Scheduler Based on Reinforcement Learning for Minimizing the Maximum End-to-End Latency. The Journal of Korean Institute of Communications and Information Sciences, 47, 9, (2022), 1330-1340. DOI: 10.7840/kics.2022.47.9.1330.

[KICS Style]

Juhyeok Kwon, Jihye Ryu, Jinoo Joung, "Network Scheduler Based on Reinforcement Learning for Minimizing the Maximum End-to-End Latency," The Journal of Korean Institute of Communications and Information Sciences, vol. 47, no. 9, pp. 1330-1340, 9. 2022. (https://doi.org/10.7840/kics.2022.47.9.1330)