Video compression and video stabilization – a winning combo

Video compression plays an important role in making it efficient, affordable and feasible to both store and stream video feeds. Video stabilization, in turn, is a perfect complement that makes compression significantly more effective. In this post, we’ll reveal the inner workings of video compression and show, with real-life examples, how video stabilization makes a tangible difference in solving video compression challenges. The examples draw on our own video enhancement software platform, Vidhance.


Why video compression is important

High-quality video files, even short clips, consume large amounts of storage. A single second of raw 4K-resolution video amounts to hundreds of megabytes!

A video is essentially a series of images, normally around 25-30 for each second of video. Each image is called a frame. Storing or transmitting a video file therefore means storing or transmitting a potentially very large set of frames. Naturally, we want to minimize the time and cost of doing so, which is why we compress videos: we sacrifice a little quality for much smaller file sizes.
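To see where those numbers come from, here is a quick back-of-the-envelope calculation in Python (assuming 24-bit RGB color and 30 frames per second):

```python
# Back-of-the-envelope data rate for raw (uncompressed) 4K video,
# assuming 8 bits per channel, 3 channels (RGB) and 30 frames per second.
width, height = 3840, 2160      # 4K UHD resolution
bytes_per_pixel = 3             # 24-bit RGB
fps = 30

frame_size = width * height * bytes_per_pixel   # bytes per frame
data_rate = frame_size * fps                    # bytes per second

print(f"One frame:  {frame_size / 1e6:.1f} MB")   # ~24.9 MB
print(f"One second: {data_rate / 1e6:.1f} MB")    # ~746 MB
```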

How video compression works

Video is compressed using a video encoder for a specific video format, and the corresponding decoder is used to turn the compressed data back into human-viewable video for playback. Both steps must be performed efficiently and quickly to enable live recording and live playback.

Figure 1: The camera delivers raw video frames via Vidhance to an encoder, which compresses the video before storing it. The compressed video must then be decoded before it can be viewed.

The individual images could be compressed by storing them in formats like JPEG and PNG. This approach is simple, but wasteful. Video compression instead usually works by computing and compressing only the differences between frames, called the delta. This is preferable to storing each individual frame in its entirety: if a frame is similar to the previous one, as is often the case in videos, describing how it changed takes far less data than describing the full frame.

Figure 2: The red box moves from one corner to another. If we express this difference as the change in each pixel, we end up with a big delta (making all the top-left pixels yellow, and making all the bottom-right pixels red). Good compression needs something better.

Since a digital image consists of pixels, the frame delta is an accumulation of differences between corresponding pixels. The delta between two pixels can be computed as the difference between their colors; a frame delta consists of all its pixel deltas.

We can visualize this delta by drawing it as its own image. If a pixel has the exact same color in both frames, its delta pixel will be black, since the delta is 0 and black is represented as (0, 0, 0) in the RGB color space. The greater the difference between corresponding pixels, the brighter their delta pixel will be. The greater the difference between the two frames, the more color and detail will be visible in the delta.
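As a minimal sketch of the idea – not how Vidhance or any particular encoder represents it internally – a per-pixel frame delta can be computed and visualized with a few lines of NumPy:

```python
import numpy as np

def frame_delta(prev: np.ndarray, curr: np.ndarray) -> np.ndarray:
    """Per-pixel delta between two RGB frames (arrays of shape H x W x 3).

    Identical pixels yield (0, 0, 0) -- black -- and larger color
    differences yield brighter delta pixels.
    """
    # Work in a signed type so differences don't wrap around,
    # then take the magnitude for visualization.
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    return np.abs(diff).astype(np.uint8)

prev = np.zeros((1080, 1920, 3), dtype=np.uint8)   # an all-black frame
curr = prev.copy()
curr[100:200, 100:200] = (255, 0, 0)               # a red box appears
delta = frame_delta(prev, curr)
print((delta != 0).sum())  # only the changed box contributes to the delta
```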

Why video stabilization improves video compression

Figure 3: Center: one frame of a shaky video. Left: the delta (change in each pixel) from the previous frame without stabilization. Right: the delta with stabilization, much smaller and easier to compress.

The image above shows one frame of an example video in the center. To the left is the delta from the previous frame without any stabilization, and to the right the delta with Vidhance video stabilization. Not a lot is happening in the video, so the delta should be mostly black. But note how the outlines of the girl’s hair show up as purple lines. This is because adding purple (255, 0, 255) to what was previously green (0, 255, 0) results in new white (255, 255, 255) pixels.

As the video was a bit shaky, unwanted camera movements result in an unnecessarily bright delta, which makes video compression harder and the video file larger. Compare this with the delta of the steadier video on the right, which requires far less data, making the file smaller.

A video’s bitrate determines how much data describes each frame, and it may be either constant or variable. A higher bitrate enables better quality but also means a larger file. When encoding video at a constant bitrate, the quality is limited by the amount of data allowed for each frame, so quality may be sacrificed to keep each delta below the limit.

With a variable bitrate, a quality setting determines how much (or how little) detail may be sacrificed in each frame, and the file size is secondary. There is also lossless video compression – smaller file sizes without any loss of quality – but it only goes so far compared to the far more common lossy methods.
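The constant-bitrate case makes file sizes easy to predict: size is simply bitrate times duration. A small illustrative calculation (the numbers are examples, not measurements):

```python
# Rough file-size estimate for constant-bitrate (CBR) encoding.
# Illustrative numbers only.
bitrate_mbps = 8    # e.g. a typical constant bitrate for 1080p video
duration_s = 60     # one minute of video

size_bytes = bitrate_mbps * 1e6 / 8 * duration_s
print(f"{size_bytes / 1e6:.0f} MB")   # 60 MB, regardless of content
```

With a variable bitrate, the same clip could come out much smaller or larger, depending on how much actually changes from frame to frame.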


How video stabilization drives storage efficiencies

Video recorded using a camera in motion, such as on a drone, smartphone or bodycam, naturally tends to be shakier. What if, because of camera movement, every pixel moved down one step? Compared pixel by pixel, the delta is large, but what actually happened is very easy to describe in a smarter way.

Encoders take advantage of this by describing not only pixel deltas but also how blocks of pixels have moved in relation to each other. Most modern video encoders (H.264, for example) have this form of motion compensation built in. But it is designed to stay as true to the original raw video as possible; it does not make the video any more stable.
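A minimal sketch of the idea behind motion compensation is exhaustive block matching: slide a block from the current frame around the previous frame and keep the offset with the smallest difference. Real encoders use far more sophisticated and faster search strategies, but the principle looks like this (grayscale frames assumed for simplicity):

```python
import numpy as np

def best_motion_vector(prev, curr, y, x, block=16, search=8):
    """Exhaustive block matching over a +/- `search` pixel neighborhood.

    Returns (dy, dx) such that the block at (y, x) in `curr` best matches
    the block at (y + dy, x + dx) in `prev`, by minimizing the sum of
    absolute differences (SAD).
    """
    target = curr[y:y + block, x:x + block].astype(np.int32)
    best, best_sad = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > prev.shape[0] or xx + block > prev.shape[1]:
                continue  # candidate block would fall outside the frame
            candidate = prev[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(target - candidate).sum()
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best

prev = np.zeros((64, 64), dtype=np.uint8)
prev[20:36, 20:36] = 255                           # a white square...
curr = np.roll(prev, shift=(4, 2), axis=(0, 1))    # ...moves down 4, right 2
print(best_motion_vector(prev, curr, 24, 22))      # -> (-4, -2): the match sits 4 up, 2 left in prev
```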

With video stabilization, small and unnecessary changes from frame to frame that otherwise occur in recorded video are minimized, effectively making each frame in the video more similar to the one before it. This allows video compression algorithms to more easily minimize the file size and bandwidth required to store or transfer video files.
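A toy model makes the effect easy to quantify. Here camera shake is simulated as a known global shift, so "stabilization" can undo it exactly; a real stabilizer such as Vidhance must of course estimate the motion first:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)  # a detailed static scene

# Hand shake: the next frame is the same scene shifted by a few pixels.
shaky = np.roll(scene, shift=(3, -2), axis=(0, 1))

# Idealized stabilization: undo the (here, known) shift so the frame
# matches its predecessor again.
stabilized = np.roll(shaky, shift=(-3, 2), axis=(0, 1))

delta_shaky = np.abs(shaky.astype(int) - scene.astype(int)).mean()
delta_stable = np.abs(stabilized.astype(int) - scene.astype(int)).mean()
print(f"mean delta, shaky:      {delta_shaky:.1f}")   # large: every pixel changed
print(f"mean delta, stabilized: {delta_stable:.1f}")  # 0.0 in this toy case
```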

In turn, this improves the communication with, and storage of, video in devices with limited storage space such as drones, bodycams and security cameras. At the same time, video quality is dramatically improved in live scenarios, such as bodycam feeds used for live guidance from command, because less bandwidth is required to stream the video.

Applying video stabilization software before compressing a video can dramatically reduce the file size for medium- and low-quality video. This enables higher possible video quality and lower bandwidth usage, ultimately increasing network performance. Our initial studies showed file size reductions typically in the range of 5% to 20% for standard encoding qualities. As a rule, the lower the quality setting, the greater the benefit of stabilization.
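You can reproduce the trend with standard tools. The sketch below uses ffmpeg's built-in deshake filter as a stand-in stabilizer – Vidhance operates on the raw camera stream instead, so expect the general trend rather than our exact numbers – and shaky_clip.mp4 is a placeholder for a clip of your own:

```python
import os
import subprocess

def encoded_size(src: str, dst: str, crf: int, stabilize: bool) -> int:
    """Encode `src` with H.264 at quality `crf` and return the file size.

    Uses ffmpeg's `deshake` filter as a stand-in stabilizer; treat the
    result as a way to reproduce the trend, not exact numbers.
    """
    cmd = ["ffmpeg", "-y", "-i", src]
    if stabilize:
        cmd += ["-vf", "deshake"]
    cmd += ["-c:v", "libx264", "-crf", str(crf), dst]
    subprocess.run(cmd, check=True)
    return os.path.getsize(dst)

plain = encoded_size("shaky_clip.mp4", "plain.mp4", crf=28, stabilize=False)
stable = encoded_size("shaky_clip.mp4", "stable.mp4", crf=28, stabilize=True)
print(f"size reduction: {100 * (plain - stable) / plain:.1f}%")
```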

The best of both worlds with Vidhance

All in all, video stabilization software such as Vidhance not only produces a more watchable video but also yields better video quality at a smaller file size and lower bandwidth requirements. The key is processing the output directly from the camera, before any compression takes place – not normal video files, but the massive raw data streams mentioned earlier.

Video enhancement software must do all this with a minimal footprint on performance and energy consumption. If compression were performed first, the video would have to be decoded, stabilized and compressed again, losing significant video quality along the way. Object tracking features also benefit from receiving the raw video before any compression loss. This is precisely how Vidhance delivers the best of both worlds in terms of video quality and storage efficiency.

Contact us to book a demo and see for yourself how your videos can be made better to watch, cheaper to store and easier to stream. For inspiration, insights and best practices for the next generation of video enhancement, enter your email address below and subscribe to our newsletter. 
