Fast Motion Estimation Using Multiple Reference Pictures in H.264/AVC


Vol. 32, No. 5, pp. 536-541, May 2007


  Abstract

In the video coding standard H.264/AVC, motion estimation using multiple reference pictures improves compression efficiency, but the gain depends on image content rather than on the number of reference pictures. Consequently, motion estimation performs a large amount of computation that is worthless for some images. This paper proposes a fast motion estimation algorithm that removes this worthless computation from motion estimation with multiple reference pictures. The proposed algorithm classifies each block as valid or invalid for multiple reference pictures and removes the worthless computation by applying a single reference picture to invalid blocks. To evaluate the proposed algorithm's performance, its image quality, bit rate, and motion estimation time are compared with those of the conventional algorithm in the reference software JM 9.5. Simulation results show that the proposed algorithm saves about 38.67% of the motion estimation time on average while keeping the image quality and bit rate (average changes of −0.02 dB and −0.77%, respectively) as good as those of the conventional algorithm.
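The abstract's idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not state the block-validity criterion, so `is_multi_ref_valid` below is a hypothetical placeholder predicate, and `search` stands in for whatever per-reference motion search the encoder uses.

```python
# Sketch of the multi-reference motion-estimation shortcut described in the
# abstract: blocks judged "invalid" for multiple reference pictures are
# searched against a single reference picture only, skipping the rest.

def motion_estimate(block, reference_pictures, search):
    """Search every reference picture; return the best (cost, ref_idx, mv)."""
    return min(search(block, ref_idx, ref)
               for ref_idx, ref in enumerate(reference_pictures))

def fast_motion_estimate(block, reference_pictures, search, is_multi_ref_valid):
    """Full multi-reference search only for 'valid' blocks; otherwise
    restrict the search to the first (most recent) reference picture."""
    if is_multi_ref_valid(block):
        refs = reference_pictures        # full multi-reference search
    else:
        refs = reference_pictures[:1]    # single-reference shortcut
    return motion_estimate(block, refs, search)
```

The saving comes entirely from how often the predicate labels blocks invalid: each invalid block skips the search over all but one reference picture, which matches the abstract's claim that the cost removed is computation whose benefit depends on image content.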


  Cite this article

[IEEE Style]

S. Kim and J. Oh, "Fast Motion Estimation Using Multiple Reference Pictures in H.264/AVC," The Journal of Korean Institute of Communications and Information Sciences, vol. 32, no. 5, pp. 536-541, 2007.

[ACM Style]

Seong-hee Kim and Jeong-su Oh. 2007. Fast Motion Estimation Using Multiple Reference Pictures in H.264/AVC. The Journal of Korean Institute of Communications and Information Sciences, 32, 5, (2007), 536-541.

[KICS Style]

Seong-hee Kim and Jeong-su Oh, "Fast Motion Estimation Using Multiple Reference Pictures in H.264/AVC," The Journal of Korean Institute of Communications and Information Sciences, vol. 32, no. 5, pp. 536-541, May 2007.