When it comes to video enhancement software, it’s not practical to have to reinvent the wheel for every single use case. There are so many different combinations of hardware and software that could potentially be used in drones, smartphones, wearable cameras and other cameras in motion. This is why it pays off to look for video enhancement software that is general enough to be used in a wide variety of configurations. Let’s explore the great potential for new video enhancement applications in drones and more details on both why and how video enhancement software needs to adapt to specific use cases.
Taking inspiration from the past and looking to the future
Writing software is hard in general. Writing a high-performance video enhancement SDK is very hard. And taking the time to make sure your code doesn’t just solve one problem on one particular set of hardware takes even more effort. We knew early on that our video enhancement software Vidhance could be used in a variety of situations, so making our software general and customizable was always a priority.
At a high level, Vidhance is a system that takes video input, processes it, and outputs either a modified frame (e.g. stabilization), metadata about the frame (e.g. location of a tracked object), or both. This is a generic description, and for a good reason. Our work neither started, nor ends, with where we are now.
Initially, we made video enhancement software for the defense industry. Video from unmanned vehicles, in the air or underwater, needed the best possible enhancements, such as contrast optimization and stabilization, as quickly as possible. The technology was later expanded to other UAV applications, including smaller drones and solutions for air traffic control. So, drones were an important part of how we started out.
Then smartphone manufacturers increasingly wanted to deploy video stabilization software to improve real-time video feeds. After several years of focusing primarily on smartphones, we’re now coming full circle back to drones. This is because we are constantly looking towards the future and the next big thing.
The future potential of video enhancement for drones
Looking towards the future doesn’t just mean keeping an eye out for what to do next. It also means planning ahead for the software you’re writing now, instead of only making easy gains for current, specific problems.
For example, drones are one of the hottest products in technology today. Their future seems bright, both as consumer products and in expanding commercial applications. Capabilities that require vision processing include collision avoidance, broader autonomous navigation, terrain analysis and subject tracking. Collision avoidance is not only relevant for fully autonomous navigation but also for “copilot” assistance when the drone is primarily controlled by a human being, analogous to today’s driver-assistance systems in cars.
These key features are poised to expand the drone market of tomorrow by making drones more capable and easier to use. The algorithms that will be used, whether they exist today or will be researched in the years to come, are typically applicable to a wide range of problems. But a general problem domain isn’t enough. The implementation – the software itself – must also be general to be able to unlock the future potential use cases for drones and similar cameras in motion.
Why video enhancement software needs to stay general
Video stabilization in particular, and video enhancement in general, face the challenge of maintaining generality. Deploying them on large devices, like surveillance aircraft in the defense industry, is quite different from deploying them on small devices like smartphones, bodycams and small drones. Integrating video stabilization on budget hardware is also vastly different from doing so on top-quality hardware.
Vidhance has been used in an app for Kontigo Care’s advanced eHealth platform Previct® to detect the effects of drug-induced stimulation on the iris of the eye. Although this is a highly specific use case, video enhancement software with a broad feature set and SDK makes it easy to add features like object tracking, live auto zoom and video stabilization as needed, or to solve entirely new problems.
While algorithms may stay the same (with small tweaks) for a long time, hardware and sensor data tend to look and work very differently across devices. Staying general doesn’t just mean being open to different problems and use cases; it also means decreasing the time required to integrate video enhancement into completely new hardware. For example, much of what we do is shuffle large numbers of pixels around and manipulate them, and depending on the hardware, this is done in very different ways. A 4K video frame contains over 8 million pixels – that’s a lot of data.
How video enhancement software adapts to different hardware configurations
In software development, hardcoding denotes writing something for exactly one configuration, which is generally frowned upon. Consider the pixel manipulation mentioned above: a simple CPU has to modify the pixels one by one, which can take a lot of time. The easy way out is to hardcode this option into the software; it will always work, since there is always a CPU. But it’s also a very slow option.
Most CPUs have multiple cores, meaning they can carry out several instructions at once. The big chunk of data can then be divided into smaller subsets, one per core, that are processed simultaneously, considerably reducing the time required. Some devices have graphics cards, and some have even more specialized hardware, like FPGAs or DSPs. All of these can be leveraged to improve performance.
For both smartphones and drones, the cost, performance and power consumption of different subsystems are taken into account when designing a product. Size and weight are especially important for drones. Different technologies deliver different tradeoffs, and video enhancement software needs to be capable of adjusting to this easily, preferably even auto-adjusting.
Figure 1: Vidhance works for general input video, is configured with the right toolset for the current hardware, and outputs its result.
The best way to process the data also depends on other factors, such as whether other software running at the same time is also in need of system resources. In short, setting up a render pipeline in a smart way takes more time than a hardcoded solution in the short run, but in the long run it allows you to create a more efficient and scalable platform. This empowers integrators to balance performance against computational cost as they see fit.
Time waits for nobody
Keeping video enhancement software general also means using modern programming techniques and languages to allow the use of complex features while still remaining compatible with older hardware. Let’s take a look at a small example of what this may entail. How about getting the current (precise) time and date? That’s simple enough, right? Well, yes, this is done quite easily in both Linux (the base of Android and most IoT systems) and Windows. On Linux, this can be done with a few lines of programming code:
time_t now = time(NULL);
result = asctime(localtime(&now));
It’s not too bad in Windows either. Unfortunately, Windows reports the time and date in another format! Since the output (and internal workings) of video enhancement software should work exactly the same regardless of the operating system, you need to be proactive even with small details like these. Therefore, when building our Vidhance video enhancement software for a Windows platform, our build system swaps out the above code for a Windows-specific equivalent.
The main takeaway is that even a small detail like this requires effort. When building the video enhancement software for a customer, information about the target system is provided along with specific customer requests, and the correct pieces of code are automatically selected and compiled, here and in a thousand other places, all ensuring the best possible performance and compatibility.
More to implementation than meets the eye
A typical implementation project often requires more than just installing a packaged product; integration and fine-tuning of algorithms normally take additional work. Clients may request everything from customized products to pre-testing or characterization evaluation in our DxO lab (read more about our testing process and lab). The results are distilled down to a few key variables. These are fed into Vidhance, which automatically adjusts to deliver the best possible video enhancement and analysis.
A “semi-automatic” process centered on maintaining generality as outlined in this post takes some effort in the short run but certainly pays off when scaling up in the long run. Ultimately, it’s easier to adapt the software to different devices, hardware configurations and client needs while making it easy to add additional features and serve more clients. And this is just barely scratching the surface of what is possible.
A customizable platform for your current and future needs
If you purchase hardware or software only for video stabilization and later need to integrate something new to enable object tracking or other features, you face an expensive and time-consuming problem. A general, customizable video enhancement platform that can be integrated into different devices and configurations is the answer. It also allows you to easily add and upgrade performance and features over time as needed, which is key to future-proofing your video quality. As a result, you’ll be poised to realize the enormous potential for new vision processing advancements in drones and similar cameras in motion.
Contact us to learn more about all the possibilities of Vidhance. We’d be happy to discuss your needs and book a demo. For inspiration, insights and best practices for the next generation of video enhancement, enter your email address below and subscribe to our newsletter.