Federated learning, communication efficiency, compression, parameter server, model update


Vol. 51, No. 1, pp. 55-59, Jan. 2026
DOI: 10.7840/kics.2026.51.1.55


Abstract

Federated learning suffers from high communication overhead due to the frequent transmission of large local model updates. To address this challenge, we propose a trend-aware projection-based compression method that adaptively selects a compressed update based on its directional similarity with the previous update. Simulation results show that the proposed method achieves higher accuracy and lower training loss than other baseline schemes under the same communication cost.
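The abstract only outlines the selection rule, so the following Python sketch is one plausible reading of it, not the paper's actual algorithm: when the current update points in nearly the same direction as the previous one, the client sends a single projection coefficient (the server can rebuild the update from the previous direction it already holds); otherwise it falls back to a generic sparsified update. The similarity threshold, the top-k fallback, and all function names here are illustrative assumptions.

```python
import numpy as np

def compress_update(update, prev_update, sim_threshold=0.9, k_ratio=0.01):
    """Hypothetical sketch of trend-aware projection-based compression.

    Assumed selection rule: high cosine similarity with the previous
    update -> transmit only a scalar projection coefficient; otherwise
    transmit a top-k sparsified update (fallback chosen for illustration).
    """
    # Cosine similarity between the current and previous flattened updates.
    denom = np.linalg.norm(update) * np.linalg.norm(prev_update)
    cos_sim = float(update @ prev_update) / denom if denom > 0 else 0.0

    if cos_sim >= sim_threshold:
        # Trend continues: one float replaces the full update vector.
        alpha = float(update @ prev_update) / float(prev_update @ prev_update)
        return ("projection", alpha)

    # Trend broke: send the k largest-magnitude entries (indices + values).
    k = max(1, int(k_ratio * update.size))
    idx = np.argpartition(np.abs(update), -k)[-k:]
    return ("topk", idx, update[idx])

def decompress_update(payload, prev_update, dim):
    """Server-side reconstruction matching compress_update."""
    if payload[0] == "projection":
        return payload[1] * prev_update
    _, idx, vals = payload
    rebuilt = np.zeros(dim)
    rebuilt[idx] = vals
    return rebuilt
```

Note that this sketch requires client and server to track the same previously applied update, since the projection branch reconstructs the new update from that shared direction.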

Cite this article

[IEEE Style]

S. Kwon and S. Park, "Federated learning, communication efficiency, compression, parameter server, model update," The Journal of Korean Institute of Communications and Information Sciences, vol. 51, no. 1, pp. 55-59, 2026. DOI: 10.7840/kics.2026.51.1.55.

[ACM Style]

Sehyeon Kwon and Sangjun Park. 2026. Federated learning, communication efficiency, compression, parameter server, model update. The Journal of Korean Institute of Communications and Information Sciences, 51, 1, (2026), 55-59. DOI: 10.7840/kics.2026.51.1.55.

[KICS Style]

Sehyeon Kwon and Sangjun Park, "Federated learning, communication efficiency, compression, parameter server, model update," The Journal of Korean Institute of Communications and Information Sciences, vol. 51, no. 1, pp. 55-59, 1. 2026. (https://doi.org/10.7840/kics.2026.51.1.55)