Best Papers
 An Audio Declipping Method Based on Deep Neural Networks 


Vol. 47, No. 9, pp. 1306-1309, Sep. 2022
10.7840/kics.2022.47.9.1306


  Abstract

This paper addresses declipping, which restores the original sound from a clipped audio signal; for this purpose, we propose a new method based on a deep neural network. The technique first detects clipping frames based on the number of clipped audio samples. The network is then trained with the magnitude spectrum of a clipping frame as input and the magnitude spectrum of the corresponding original frame as output. In experiments on a speech database comparing the RMSE and LSD between the original sound and the reconstructed signal, the proposed method showed improved performance over the existing method.
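The abstract implies a frame-based pipeline: detect clipping frames by counting clipped samples, then learn a mapping from the clipped frame's magnitude spectrum to the clean frame's magnitude spectrum. The following is a minimal sketch of that data-preparation and evaluation setup; the frame length, hop size, clipping threshold, sample-count threshold, and the particular LSD formula are illustrative assumptions rather than values from the paper, and the DNN itself is omitted.

```python
# Illustrative sketch only: frame length, hop size, thresholds, and the LSD
# formula below are assumptions for demonstration, not values from the paper.
# The DNN (the spectral mapping model) is omitted.
import numpy as np

FRAME_LEN = 512      # assumed analysis frame length in samples
HOP = 256            # assumed hop size
CLIP_LEVEL = 0.99    # samples at or above this magnitude are treated as clipped
MIN_CLIPPED = 10     # assumed minimum clipped-sample count for a "clipping frame"

def split_frames(signal):
    """Split a 1-D signal (length >= FRAME_LEN) into overlapping frames."""
    n = (len(signal) - FRAME_LEN) // HOP + 1
    return np.stack([signal[i * HOP:i * HOP + FRAME_LEN] for i in range(n)])

def is_clipping_frame(frame):
    """Detect a clipping frame by counting samples stuck at the clipping level."""
    return np.sum(np.abs(frame) >= CLIP_LEVEL) >= MIN_CLIPPED

def magnitude_spectrum(frame):
    """Magnitude spectrum of one windowed frame (the DNN input/target feature)."""
    return np.abs(np.fft.rfft(frame * np.hanning(FRAME_LEN)))

def build_training_pairs(clipped_signal, clean_signal):
    """Pair each clipping frame's spectrum (input) with the clean frame's spectrum (target)."""
    inputs, targets = [], []
    for x, y in zip(split_frames(clipped_signal), split_frames(clean_signal)):
        if is_clipping_frame(x):
            inputs.append(magnitude_spectrum(x))
            targets.append(magnitude_spectrum(y))
    return np.array(inputs), np.array(targets)

def rmse(reference, estimate):
    """Root-mean-square error between two equal-length signals."""
    return np.sqrt(np.mean((reference - estimate) ** 2))

def lsd(ref_spectra, est_spectra, eps=1e-10):
    """One common log-spectral distance: RMS dB-spectrum difference per frame, averaged."""
    diff_db = 20.0 * np.log10((ref_spectra + eps) / (est_spectra + eps))
    return np.mean(np.sqrt(np.mean(diff_db ** 2, axis=-1)))
```

Given time-aligned clipped and clean recordings, build_training_pairs would yield the (input, target) spectra for training a regression-style DNN, and rmse/lsd correspond to the two evaluation measures named in the abstract.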



  Cite this article

[IEEE Style]

S. U. Choi and S. H. Choi, "An Audio Declipping Method Based on Deep Neural Networks," The Journal of Korean Institute of Communications and Information Sciences, vol. 47, no. 9, pp. 1306-1309, 2022. DOI: 10.7840/kics.2022.47.9.1306.

[ACM Style]

Seung Un Choi and Seung Ho Choi. 2022. An Audio Declipping Method Based on Deep Neural Networks. The Journal of Korean Institute of Communications and Information Sciences, 47, 9, (2022), 1306-1309. DOI: 10.7840/kics.2022.47.9.1306.

[KICS Style]

Seung Un Choi and Seung Ho Choi, "An Audio Declipping Method Based on Deep Neural Networks," The Journal of Korean Institute of Communications and Information Sciences, vol. 47, no. 9, pp. 1306-1309, 9. 2022. (https://doi.org/10.7840/kics.2022.47.9.1306)