Video compression algorithms reduce image quality because of the lossy approach they take to lowering the required bandwidth. This affects commercial streaming services such as Netflix or Amazon Prime Video, but also video conferencing and video surveillance systems. In all these cases it is possible to improve video quality, both for human viewing and for automatic video analysis, without changing the compression pipeline, through a post-processing step that eliminates the visual artefacts created by the compression algorithms. In this presentation we show how deep convolutional neural networks, implemented in Python using TensorFlow, Scikit-Learn and Scipy, can be used to reduce compression artefacts and reconstruct missing high-frequency details that were eliminated by the compression algorithm.
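The post-processing idea can be sketched as a small residual CNN in TensorFlow that learns to map a compressed frame back toward the original. This is a minimal illustrative sketch in the spirit of artefact-reduction networks such as AR-CNN; the layer sizes and the function name are assumptions, not the architecture used in the presentation.

```python
import tensorflow as tf


def build_artifact_reduction_cnn(channels: int = 3) -> tf.keras.Model:
    """Hypothetical sketch: a residual CNN for compression-artefact removal.

    The network predicts the artefact residual and adds it back to the
    input frame, so it only has to learn the correction, not the full image.
    Filter counts and kernel sizes are illustrative.
    """
    inputs = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(64, 9, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv2D(32, 5, padding="same", activation="relu")(x)
    residual = tf.keras.layers.Conv2D(channels, 5, padding="same")(x)
    outputs = tf.keras.layers.Add()([inputs, residual])
    return tf.keras.Model(inputs, outputs)


model = build_artifact_reduction_cnn()
# Trained on pairs of (compressed frame, original frame), e.g. with MSE loss.
model.compile(optimizer="adam", loss="mse")
```

Because the convolutions are fully convolutional (no fixed spatial size), the same model can restore frames of any resolution at inference time.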
A possible application is improving video conferencing or live streaming. Since in these cases no original uncompressed video stream is available, we report results using a no-reference video quality metric, showing high naturalness and quality even for efficient networks.
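To illustrate what "no-reference" means, here is a toy score that needs no original frame: the variance of the Laplacian, a crude sharpness proxy computable with Scipy. This is only an illustrative stand-in, not the no-reference quality metric used in the presentation.

```python
import numpy as np
from scipy import ndimage


def laplacian_sharpness(frame: np.ndarray) -> float:
    """Toy no-reference score: variance of the Laplacian of a frame.

    Higher values mean more high-frequency detail; blurred or heavily
    compressed frames score lower. No pristine reference frame is needed.
    """
    # Collapse colour channels to a single luminance-like plane if present.
    gray = frame.mean(axis=-1) if frame.ndim == 3 else frame
    return float(ndimage.laplace(gray).var())


# A detailed frame scores higher than a blurred copy of itself.
rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = ndimage.gaussian_filter(sharp, sigma=2.0)
```

Real no-reference metrics (e.g. BRISQUE-style models) combine many such statistics with learned weights, but the principle is the same: score quality from the degraded frame alone.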
Video compression algorithms used to stream video are lossy, and as compression rates increase they cause strong degradation of visual quality. We show how deep neural networks can eliminate compression artefacts and restore lost details.