We’ve come a long way from flip phones with a single mediocre camera to today’s advanced multi-lens camera systems. Modern camera-in-motion hardware, combined with powerful software-based algorithms, has played a key role in improving video quality compared to just a couple of years ago. Now, as 5G and AI mature, we’re approaching another phase shift in how society at large uses video and in the role video stabilization plays.
What’s a camera in motion?
As opposed to a camera sitting perfectly still while shooting video, a camera in motion produces shaky output if the video isn’t stabilized. The faster and more erratically the camera moves, the more important video stabilization becomes. Common cameras in motion include smartphones, drones, bodycams and action cameras. But that’s only the tip of the iceberg as we enter a more video-centric era. For instance, various types of wearables are being pioneered that will leverage video for all kinds of new applications. This is especially significant for frontline industrial workers but will be relevant in many other sectors.
All of these types of cameras in motion rely heavily on video stabilization to give users the best possible video experience.
Video stabilization as the foundation for new methods of enhancement
Video stabilization is far from the only capability needed to achieve high video quality. In cameras in motion, however, many of the others would be more or less a lost cause without effective video stabilization as a foundation. These video enhancement features include auto zoom and object tracking, field of view and distortion correction, and motion blur reduction. Once video is stabilized effectively in a camera in motion, these additional enhancements become possible, and many more added-value capabilities are around the corner. Wider use of AI, augmented reality (AR) and virtual reality (VR) is poised to bring new ways of using cameras in motion.
In turn, these new capabilities will be more effective and easier to implement with software-based video stabilization algorithms as a trusty foundation.
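To give a feel for what that software foundation involves: the core loop of most software stabilizers is to estimate the camera’s motion frame by frame, smooth the resulting trajectory, and warp each frame by the difference between the shaky and the smoothed path. Here is a minimal sketch, with camera motion reduced to a one-dimensional offset per frame for simplicity; real pipelines estimate full 2-D or 3-D transforms from image features or gyroscope data, and the function names below are illustrative, not from any particular product.

```python
def smooth_trajectory(offsets, radius=2):
    """Moving-average smoothing of a per-frame camera offset trajectory."""
    smoothed = []
    for i in range(len(offsets)):
        lo = max(0, i - radius)
        hi = min(len(offsets), i + radius + 1)
        window = offsets[lo:hi]
        smoothed.append(sum(window) / len(window))
    return smoothed

def correction_offsets(offsets, radius=2):
    """Per-frame corrections that move the shaky path onto the smooth one.

    Each frame would then be warped (shifted) by its correction amount.
    """
    smoothed = smooth_trajectory(offsets, radius)
    return [s - o for s, o in zip(smoothed, offsets)]

if __name__ == "__main__":
    # A shaky cumulative horizontal camera path, in pixels per frame.
    shaky = [0.0, 3.0, 1.0, 4.0, 2.0, 5.0, 3.0]
    print(correction_offsets(shaky))
```

The follow-on enhancements mentioned above build on exactly this kind of output: once the corrective warp per frame is known, features like auto zoom can crop conservatively around the stabilized region, and motion blur reduction can use the same motion estimates.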
Tomorrow’s video stabilization needs for cameras in motion
Video consumption in society has already increased substantially in recent years, from 1.5 hours per day in 2018 to 2.5 hours per day in 2021, boosted during the COVID-19 pandemic by the surge in video conferencing. The use of video is expected to keep rising as 5G becomes mainstream, allowing much faster video sharing and higher-quality video streaming.
To take full advantage of 5G, video compression will need to improve further so that ever-higher-definition video can be shared on the fly from a smartphone or streamed from a drone. In turn, as video runs for longer periods on devices like smartphones that aren’t used exclusively for video, this will pose new requirements for battery and thermal performance.
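A back-of-the-envelope calculation shows why compression matters so much here. Uncompressed 4K video at 60 frames per second is far beyond any mobile uplink, so the codec has to close a gap of several hundred to one. The 25 Mbps streaming target below is an illustrative assumption, not a figure from any standard.

```python
def raw_bitrate_bps(width, height, fps, bytes_per_pixel=3):
    """Uncompressed bitrate in bits per second (assumes 24-bit RGB frames)."""
    return width * height * bytes_per_pixel * 8 * fps

raw = raw_bitrate_bps(3840, 2160, 60)        # uncompressed 4K at 60 fps
target = 25_000_000                           # assumed 25 Mbps stream budget
print(f"raw: {raw / 1e9:.1f} Gbps")           # prints "raw: 11.9 Gbps"
print(f"needed ratio: {raw / target:.0f}:1")  # prints "needed ratio: 478:1"
```

Squeezing roughly 12 Gbps of raw pixels into a 25 Mbps stream, frame after frame, is exactly where stabilized video helps: steadier frames compress more efficiently because less of the bit budget is spent encoding camera shake as apparent motion.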
As a result, the video stabilization platforms of tomorrow will need to be open to extensive customization and optimization, giving extra priority to compression, bandwidth, higher-definition video, power management, noise management or another factor depending on the use case.
Why we need to reimagine video stabilization quality
To support new cameras in motion, new methods of video enhancement and more flexible optimizations, we need to rethink the way we define video stabilization quality. This has key implications for everyone from smartphone OEMs and drone manufacturers to video stabilization testing institutes: it affects how video stabilization software is built, how video stabilization is tested, and what your customers will expect and experience.
Don’t hesitate to get in touch with us and share your view of present and future video capabilities. For inspiration, insights and best practices for the next generation of video stabilization, enter your email address and subscribe to our newsletter.