The Knowledge Base is the explanatory section of our website. Here you will find a number of explanatory videos that give a solid understanding of the functions of our products. The Knowledge Base covers a powerful suite of video functions that are key to a wide variety of intelligence, surveillance and reconnaissance applications.
All high performance long-range gimbals today have an onboard video processor and there are many good reasons for that. The onboard processor is a dedicated hardware processor which receives the video stream from the sensors inside the gimbal, processes the video and sends it to the datalinks. Here is a list of the key functions that the processor performs:
There are several additional image processing functions that improve the usability of the gimbal:
The onboard processor is a piece of hardware, a miniature computer. In the newest generations of gimbals the onboard video processor is integrated inside the gimbal and is not noticeable from the outside. The older technology uses an image processor that is external to the gimbal and is essentially a separate electronics box.
The new generation gimbals use an integrated processor, which brings many benefits:
The older generation gimbals with external image processors have the following limitations:
Software stabilization is one of the fundamental functions of the modern miniature gyro-stabilized gimbal.
The principle behind this feature is simple: individual video frames are registered against each other in order to find the misalignment between them. Once the misalignment is known, the latest frame is shifted in the X and Y directions so that the image in the center remains stable for the observer. This is an extremely valuable feature in miniature gimbals, which are subjected to high vibrations and rapid accelerations. Under virtually any circumstances this feature will improve the stability of the video from a moving platform. The operator can then process the stabilized video with considerably reduced workload and concentrate on extracting valuable intelligence out of the video.
In short, software stabilization continuously registers consecutive frames, estimates the inter-frame shift, and compensates for it before the video leaves the gimbal.
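The register-and-shift principle described above can be sketched in a few lines of Python. This is an illustrative sketch only: it uses whole-frame phase correlation and integer-pixel shifts, whereas a production stabilizer uses more robust motion estimation, sub-pixel correction and filtering to separate intentional panning from jitter.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Return the (dy, dx) shift to apply to `curr` to align it with `prev`,
    estimated by phase correlation of the two frames."""
    cross = np.fft.fft2(prev) * np.conj(np.fft.fft2(curr))
    cross /= np.abs(cross) + 1e-12          # keep phase information only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = prev.shape
    if dy > h // 2:                          # unwrap circular shifts
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

def stabilize(prev, curr):
    """Shift the latest frame so the scene lines up with the previous frame."""
    dy, dx = estimate_shift(prev, curr)
    return np.roll(np.roll(curr, dy, axis=0), dx, axis=1)
```

For a frame whose content drifted 4 pixels down and 7 pixels right, `estimate_shift` returns (-4, -7) and `stabilize` moves the content back into place.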
Not all software stabilization is the same.
The second approach is the cheapest one, since it uses a standard feature of the block camera. However, since block cameras are intended for CCTV applications, this approach is nearly useless for long-range gyro-stabilized gimbal applications. In CCTV applications, where the camera is mounted on a pole, the image jitter frequency and amplitude are very different from what a gyro-stabilized gimbal experiences in an airborne application.
The third approach can improve the viewable image quality, however there is a significant drawback to this method: the video stream from the aircraft is sent un-stabilized and consumes significant datalink bandwidth, which reduces the radio range of the aircraft. This approach is often used by non-gyro-stabilized cameras in small hand-launched unmanned aircraft. In high performance surveillance gimbals this approach is rarely used today; using it with HD and FHD video streams would severely affect the radio datalink range performance.
While software stabilization is an indispensable feature, 2-axis gimbals also have a roll axis that is not stabilized. Roll disturbance of the video cannot be removed by shifting individual frames in the X and Y plane. Roll correction is a software stabilization feature that corrects the angular displacement between individual frames of the image.
Roll stabilization significantly improves the image quality and further reduces the bandwidth of the video. When this feature is combined with classical software stabilization and capable mechanical stabilization, the video output of a long-range surveillance gimbal gains a stability that is traditionally reserved for much bigger and more expensive gimbals, as well as for more stable aircraft platforms.
At some point in the payload selection process you will inevitably ask yourself: how do I quantitatively compare the performance of various gimbals? Which parameters should I compare? It is very likely that you will open gimbal payload datasheets and select parameters that are printed in just about every datasheet. You will look for similar parameters and put them into a comparison spreadsheet. Then you will try to compare the values and decide which gimbal has the best performance.
This seems like a reasonable engineering approach. Unfortunately, it is simply not applicable to miniature stabilized payloads. There are several simple reasons for that, which I will try to explain below.
Most of the parameters that you will compare from datasheets are not coupled to stabilized image quality. For example, you will be tempted to compare the angular speed and encoder resolution, since it is intuitive to think: the higher the speed, the better the stabilization, or the higher the resolution of the encoder, the more precise the stabilization. This is not the case, and as a potential user of the system you should not be concerned with the encoder resolution, the maximum angular speed of the gimbal, or any other internal parameter that the manufacturer may state.
This gets us to the next point: which parameter should we look at in order to compare the quality of the stabilized image? There should definitely be a parameter that actually shows the image stabilization performance, right? This is true; there is a parameter that is intended to quantify the stabilization performance of the gimbal. It is called stabilization level, LOS stabilization performance or sometimes just stability, and is given in microradians. This is the parameter that does show the image stabilization performance of the gimbal. However, this parameter is widely misused among small gyro-stabilized payload manufacturers. Sometimes it is wrongly interchanged with encoder resolution or accuracy. Sometimes it is called pointing accuracy. Sometimes it is guesstimated by the manufacturer based on competitors' stated values. Long story short, there is complete chaos in the stability performance values stated by various small gimbal manufacturers. The explanation of this surprising fact turns out to be very simple:
There is no standardized procedure among small payload manufacturers that governs how this parameter is measured. Moreover, such a procedure would require highly specialized and expensive automated test rigs, which are usually not accessible to small payload manufacturers.
Under these circumstances there is a single valuable piece of advice that we can give: DO NOT trust stabilization performance values during your selection journey. If you do, it can easily turn out that the worst performing gimbal has the best stabilization parameter on paper, just because the engineers mixed it up with some other parameter that looks similar...
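One more reason stated microradian figures are hard to compare is that their visual impact depends entirely on the optics. A quick back-of-the-envelope conversion (all numbers here are hypothetical, not taken from any datasheet) shows how a given residual jitter translates into pixels on screen:

```python
import math

def jitter_in_pixels(stability_urad, fov_deg, h_pixels=1920):
    """Convert residual line-of-sight jitter (microradians) into image
    jitter in pixels, for a given horizontal field of view and sensor width."""
    # Angle subtended by a single pixel (the IFOV), in microradians
    ifov_urad = math.radians(fov_deg) / h_pixels * 1e6
    return stability_urad / ifov_urad

# A hypothetical 100 urad residual on a 1920-pixel-wide sensor:
#   at 5.0 deg FoV one pixel spans ~45 urad, so the jitter is ~2.2 pixels
#   at 2.5 deg FoV the same 100 urad smears across ~4.4 pixels
```

Halving the field of view doubles the visible jitter, which is also why videos should only ever be compared at a similar FoV.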
Compare the actual quality of the video under similar conditions. Be aware that the videos should be filmed with similar settings in order to give an objective comparison. We recommend trying to find videos that are filmed at a nearly identical field of view and from similar aircraft platforms.
Below is a deeper guidance on this subject:
Field of view of the camera. The smaller the FoV, the more apparent the stabilization limitations become. Video at 5 degrees FoV will look twice as stable as video at 2.5 degrees FoV, which in turn will look twice as stable as video at 1.25 degrees FoV. If you compare surveillance gimbals, try to compare videos shot at roughly 2.5 degrees FoV; most high performance gimbals have optical zoom that reaches that far.
The platform on which the gimbal was installed during filming plays a role in video quality. A manned aircraft has considerably lower vibration levels than a small UAV. If you shoot from a small general aviation aircraft like a Cessna, the vibration level will be lower than, for example, on a VTOL UAV or a small gas powered UAV with a single cylinder engine. In a manned aircraft it is also possible to install a large and heavy vibration isolation system that would be impossible to replicate in a weight and volume sensitive UAV.
The ability to provide good video under high vibrations is closely connected to the gimbal design, as well as to the passive isolators that are supplied with the gimbal. A precisely balanced gimbal will generate acceptable video even under high vibrations.
Is the video filmed from a fixed wing or a rotary platform? If the gimbal is installed on a helicopter or VTOL and the vehicle is hovering, there is no relative movement between the platform and the object, so the region of interest will not drift out of the video frame. Such video does not reveal the capability of the gimbal to keep pointing at the area of interest while the platform is moving. The latter is considerably more challenging, and this is where features such as target tracking, scene steering and geo-pointing come into play. All of these features work to keep the region of interest in the center of the video frame irrespective of the platform movement.
Remember that in a fixed wing application the aircraft moves all the time, so it is a challenge for the gimbal to remain locked on an object of interest, especially at a narrow FoV such as 2 degrees. Therefore in fixed wing surveillance applications it is critical to have a video tracker, which locks onto the target and keeps it in the video frame irrespective of the aircraft's flight direction.
Make sure that the video is taken from the gimbal that you are reviewing. Manufacturers sometimes publish videos from the larger (and considerably more expensive) gimbals in their product range.
We have seen cases where manufacturers have a range of gimbals with good specs, however there are no, or very few, videos that actually show how the gimbal performs in real life. Here is a list of red flags that will simplify the selection of the right supplier:
All competitive surveillance gimbals that are manufactured in the USA fall under the International Traffic in Arms Regulations, or ITAR. If you intend to use the gimbal in the USA only and never resell, export or use your products in other countries, this is not a problem. If you do intend to use or sell internationally, it becomes a major problem. No matter what the seller of the product tells you, the process takes an unpredictable amount of time and needs to be repeated every time you want to export the product to another country. From our own experience, an ITAR export license can easily take 7+ months to process, and there is no way to shorten this. Worst of all, you never know how much time it will actually take. Our advice: avoid ITAR regulated surveillance gimbals if you want to stay competitive in the business.
If the product is manufactured in another country, such as a European Union member state, it does not fall under ITAR but under local export regulations, which are usually much less restrictive than ITAR. For example, in any of the European Union countries the export procedure is standardized and for a surveillance gimbal takes less than 30 days. In the EU there is also a list of countries for which no export license is required at all: Australia, Canada, Japan, New Zealand, Norway, Switzerland and the United States of America. This is a great advantage and will positively influence your competitiveness, costs, lead times and flexibility.
If you require the surveillance camera to work at night, then you need a gimbal with an infrared (IR) sensor. In this case you need to be aware that IR cameras are usually export controlled. If the IR sensor is produced in the USA, it will be ITAR controlled, and as we already mentioned, we would avoid anything ITAR controlled if you would like to stay competitive in the business. However, there are some exceptions: low performance IR sensors with low resolution and/or a slow frame rate (usually 9 Hz) are exempt and do not require an export license for shipping outside the U.S. to most countries. This sounds attractive, however such low performance IR sensors have very limited applicability in long-range surveillance gimbals. There are several good explanation videos available on youtube.com comparing 9 Hz and 30 Hz IR cameras.
The latest generation of long-range gimbals perform the video encoding onboard and stream the video over IP. The main benefits of IP video are:
Usually the video is encoded in the H.264 standard and wrapped in an MPEG-2 Transport Stream (MPEG-TS), or encoded in MPEG-4. H.264 is considered the industry standard.
Older generation or less advanced gimbals have an analog composite video output. Some users purchase such a gimbal and later find out that they would like to use an IP datalink, so they need to encode the video onboard the aircraft.
It is possible to use 3rd party hardware to perform the encoding. Several approaches are possible: using video encoder hardware intended for CCTV cameras, or using dedicated hardware designed for UAVs. With both of these approaches, be prepared for an adventure full of unexpected surprises.
The takeaway from this story is: invest in gimbals that already have an integrated video processor with IP output, where the system is validated to be operational.
Many miniature surveillance gimbal manufacturers advertise that the gimbal has an HD or even FHD sensor. Users understand that higher resolution is better and demand the highest resolution sensors possible. The question is: what does this HD actually mean? And the even bigger question the user needs to ask himself: how are you going to transmit the HD or Full HD (FHD) resolution to the ground?
We see two fundamental groups of gimbals on the market. Both of them carry an HD tag, however only one actually makes sense for long-range surveillance applications.
It is not rocket science to mechanically fit an HD or FHD sensor inside any gimbal. The important point is what video interface you are going to get out of the gimbal. If the gimbal does not have an integrated image processor, then the only interface the manufacturer can give you is whatever comes out of the sensor itself. Usually it is analog component video; sometimes it is digital LVDS. In either case, this is absolutely useless for you as a user. You will not find a video transmitter or datalink that will get this video transmitted to the ground. What will most likely happen is that you end up using the old composite video output from the gimbal and sending a standard definition video to the ground.
Long story short: avoid any gimbal that gives you HD video in a form that you cannot use. New generation video transmission is done over IP datalinks, so you need the gimbal to output an IP stream. There are many IP datalinks available today, and this is the only reasonable way to transmit HD or FHD video over any distance that is suitable for long-range surveillance applications.
The second group of gimbals is the latest generation of miniature gimbals. The manufacturer actually did their homework and delivers a gimbal which provides the video in a way that makes sense for long-range surveillance. You can actually transmit the video to the ground using an IP datalink that you can purchase. The video is provided as an IP stream, usually encoded with H.264 (wrapped in MPEG-TS), and often the user can select MPEG-4 as well. The user can also select the compression frame rate to meet the bandwidth requirements of the datalinks.
Our advice: if you would like to have HD or FHD video, make sure that the gimbal outputs an IP stream.
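To see why an IP stream with onboard compression is the only practical route, compare raw and compressed bitrates. The numbers below are illustrative assumptions (YUV 4:2:2 at 16 bits per pixel, an H.264 rate of a few Mbit/s), not measurements of any particular gimbal or datalink:

```python
def raw_bitrate_mbps(width, height, fps, bits_per_pixel=16):
    """Uncompressed video bitrate in Mbit/s (assuming YUV 4:2:2, 16 bpp)."""
    return width * height * fps * bits_per_pixel / 1e6

raw = raw_bitrate_mbps(1920, 1080, 30)   # ~995 Mbit/s for FHD at 30 fps
h264 = 4.0                               # Mbit/s, an illustrative H.264 rate
ratio = raw / h264                       # roughly 250:1 compression
```

No small airborne datalink carries nearly a gigabit of raw video, while a few Mbit/s of H.264 fits comfortably, and lowering the compression frame rate shrinks the stream further.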
This is a topic for a separate discussion, however we will give a short and concentrated explanation of how this is done in the industry.
There are two methods in use today: transmitting video over an analog transmitter, or transmitting video over an IP datalink. Analog transmission is low-cost and widely used in hobbies such as FPV. Analog transmitters can carry a standard definition video. Analog video will pick up interference and will appear noisy most of the time. This is considered an old technology, and it is rapidly being replaced with IP datalinks in professional applications. IP datalinks are similar to Wi-Fi and can transmit IP data, including IP video streams as well as serial data streams. IP datalinks are bidirectional, so data can be streamed both ways. For unmanned aircraft applications this means that it is possible to have a single IP datalink onboard the aircraft: the same datalink transmits command and control data for the autopilot, command and control data for the payload, as well as the IP video. This is a simple and elegant approach that is used in the latest generation of professional long endurance and long range unmanned aircraft. IP datalinks can transmit HD and FHD video, and since the data is transmitted digitally, the viewable video does not have noise.
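Because the IP datalink behaves like a transparent network pipe, each onboard service can share it simply by using its own UDP port. The sketch below illustrates the idea on the loopback interface; the address, port numbers and stream names are assumptions made for the example, not values from any specific datalink or autopilot.

```python
import socket

GROUND = "127.0.0.1"            # loopback stand-in for the ground station
PORTS = {
    "autopilot_c2": 14550,      # command and control for the autopilot
    "payload_c2": 14551,        # command and control for the gimbal
    "video": 5600,              # H.264 video stream (RTP/UDP is common)
}

def send(stream: str, payload: bytes) -> None:
    """Send one datagram for the named stream over the shared IP link."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (GROUND, PORTS[stream]))
```

All three streams travel over the same radio; the ground side simply listens on the corresponding ports.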
Here is a short comparison of how the unmanned system looks with Analog transmitter vs IP datalink:
If you require wireless transmission of video in HD or FHD, we recommend investing in an IP datalink solution as early as possible. Sometimes gimbal manufacturers can offer IP video datalinks for their gimbals as part of the package. This is a great benefit, since the solution is validated and no experimentation is needed on the user's side.
If you require a significant radio transmission range, for example 100+ km, and a gimbal manufacturer can supply a complete and validated plug-and-play tracking antenna system together with the IP datalinks, you should seriously consider this option. It is likely that you will save yourself a lot of pain, reduce your costs and get better results, without wasting valuable time and resources on equipment that does not work together. We know many companies that learned this the hard way…
Two-axis miniature gyro-stabilized gimbals can be divided into two generic types: direct drive and indirect drive. In a direct drive design the electric motor is placed directly on the shaft of the stabilized axis. Indirect drive is any design that uses pulleys, gears, belts or strings to transmit the rotational movement from the motor to the payload.
Nearly all latest generation long-range miniature payloads use direct drive technology. Older designs use many types of indirect drive methods, but the industry is steadily moving to direct drive gimbals.
Some low-cost designs use modified servo mechanisms from the RC hobby world; such designs are suited only to low performance hobby applications and are not used for professional surveillance.
Object tracking is another fundamental function of a surveillance gimbal. The processing is done inside the gimbal: the video processor automatically steers the pan and elevation axes of the gimbal so that the object of interest stays in the middle of the video frame. The operator can designate a moving or static target and the gimbal will lock onto it. The target remains locked even if the aircraft platform is moving or loitering. The target also remains locked if the object is moving relative to the platform, for example when tracking a moving vehicle on a highway. The operator can zoom in to the narrowest FoV while the gimbal remains locked on the target.
For long range surveillance applications the object tracking function is critical. Without object tracking, the operator would not be able to zoom into the target and keep the target in the video frame.
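The steering side of a tracker can be illustrated with a minimal proportional loop: the tracker reports where the object sits in the frame, and the gimbal axes are commanded at rates proportional to the angular error. The gain and geometry below are illustrative assumptions, not any manufacturer's actual control law.

```python
def steering_command(target_px, frame_size, fov_deg, gain=1.5):
    """Convert the tracked object's pixel position into pan/tilt rate
    commands (deg/s) that drive the object back to the frame center."""
    w, h = frame_size
    fov_x = fov_deg                     # horizontal field of view
    fov_y = fov_deg * h / w             # assume square pixels
    # Angular error of the object relative to the frame center
    err_pan = (target_px[0] - w / 2) / w * fov_x
    err_tilt = (target_px[1] - h / 2) / h * fov_y
    return gain * err_pan, gain * err_tilt
```

An object drifting right of center yields a positive pan rate; once the object is centered, both commands drop to zero and the gimbal holds its line of sight.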
Let’s imagine that your aircraft is equipped with a capable gyro stabilized camera system. The image is stable, the operator can zoom in to the region of interest and the gimbal will automatically keep the region of interest inside the picture frame as the aircraft loiters at significant standoff distance.
As soon as the operator zooms in to a narrow field of view, he gains the ability to see details and assess the situation inside the image frame. At the same time, the operator loses the ability to see the entire area of interest. This is called the 'soda straw' effect, since the operator sees only a small fraction of the entire picture, as if he were looking through a soda straw.
In surveillance missions the operator is often tasked to detect moving objects, then recognize and identify them. Usually this requires scanning the area at a narrow field of view; this is a tedious job, and a human cannot perform it effectively for a prolonged time.
A much better approach is to look at a significant area at a wide field of view, for example an entire village. An integrated image processing computer constantly analyzes the image and searches for groups of pixels that are moving relative to the background. As soon as movement is detected, a visual box is overlaid on the video stream. In a situation where many objects are moving, all of them are indicated with visual overlays. With this type of information the operator is able to concentrate on the activities inside a large area of interest and, if necessary, zoom in to get more information on an object. The operator can instantaneously see all the moving objects, identify their direction of movement and their number, and take the necessary actions. Since the detection part is handled by the software, there is little risk that something remains undetected during the mission.
Such a tool exists and is called a Moving Target Indicator, or MTI. We define two levels of Moving Target Indicator functionality: one is a 'Large object Moving Target Indicator', the other is a 'Small object Moving Target Indicator'. There is a functional difference between the two; in short:
Large object MTI will indicate objects that can be detected by a human operator close to 100% of the time.
Small object MTI will indicate objects that are difficult for a human operator to detect. This includes small and slow moving targets. In other words, a human operator would miss most of the targets that a small object MTI detects.
There is a completely different functionality behind these features:
The large object MTI helps the operator designate a moving object in order to start automatic tracking. The operator does not have to click on the screen while trying to catch the moving object of interest; rather, he confirms which target he would like to track with the push of a single button. The camera then slews automatically to the target and continues tracking the object. Later he can toggle between the various moving targets. Since it is often tricky to manually click on a moving target, this function greatly simplifies the process of initiating a track.
The small object MTI, on the other hand, provides the next level of situational awareness to the operator. This feature automatically extracts valuable intelligence out of the video stream, and does it in real time during the mission.
The small object MTI requires significant processing power and is considerably more difficult to implement. This technology is considered state of the art today and is not yet widely used in the industry, however it is clear that this is the future of surveillance payloads.
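At its core, any MTI looks for pixels that change relative to the background. The sketch below shows the idea with simple frame differencing on an already-registered scene; a real MTI must first compensate the platform's own motion and uses far more sophisticated detection, especially for the small object case, so treat this as a toy illustration only.

```python
import numpy as np

def detect_motion(prev, curr, thresh=0.1, min_pixels=4):
    """Toy MTI: flag pixels that changed between two registered frames and
    return a bounding box (x0, y0, x1, y1) for the overlay, or None."""
    diff = np.abs(curr.astype(float) - prev.astype(float))
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) < min_pixels:             # too few changed pixels: noise
        return None
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())
```

The returned box is what would be drawn over the operator's video as the visual overlay.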
Our advice when you select your next surveillance payload:
How does it work? The operator marks the impact point with a simple click of a button or joystick. The software automatically calculates deviations in reference to the gun-target line. The feature supports several artillery batteries, as well as mean point of impact (MPI) calculations if necessary. The operator points the crosshairs at the impact point and marks a shot, which is added to the defined target. The software then calculates the deviation between the target location and the shot position (or MPI).
The artillery, target and shot positions can be entered in decimal degrees or in MGRS. The operator reports gun-target (GT) line corrections in meters.
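The deviation calculation itself is straightforward plane geometry once all positions are in a common local grid. The sketch below works in local (easting, northing) meters, for example after converting MGRS coordinates, and uses a flat-earth approximation; it illustrates the principle, not the product's actual algorithm.

```python
import math

def gt_line_correction(gun, target, impact):
    """Resolve the miss distance of a shot into range and deflection
    components along the gun-target (GT) line. Inputs are (easting,
    northing) tuples in meters in a local grid."""
    # Unit vector pointing from the gun toward the target
    az = math.atan2(target[0] - gun[0], target[1] - gun[1])
    ux, uy = math.sin(az), math.cos(az)
    # Miss vector from the target to the impact point
    mx, my = impact[0] - target[0], impact[1] - target[1]
    over = mx * ux + my * uy     # positive: shot went long
    right = mx * uy - my * ux    # positive: shot landed right of the line
    return over, right
```

A shot that lands 100 m long and 50 m right of the target would be reported back to the battery as the opposite correction, 'drop 100, left 50'.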
The Geo-Location feature enables the following options:
GPS coordinates (longitude, latitude) of the target and the slant range from the camera to the target are indicated on the screen at all times.
Pointing the gimbal at the chosen GPS coordinate.
Supports SRTM elevation data.
North position is shown in the main camera window at all times.
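Geo-pointing boils down to turning two GPS positions into pan and tilt angles plus the slant range. The sketch below uses a flat-earth approximation and ignores the aircraft's attitude; a real implementation rotates the line of sight through the AHRS attitude and takes the target altitude from SRTM elevation data. It illustrates the geometry only, not the product's algorithm.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius, meters

def point_at(ac_lat, ac_lon, ac_alt, tgt_lat, tgt_lon, tgt_alt=0.0):
    """Pan angle (deg from north), tilt angle (deg, negative is down) and
    slant range (m) from the aircraft to a GPS coordinate."""
    north = math.radians(tgt_lat - ac_lat) * EARTH_R
    east = math.radians(tgt_lon - ac_lon) * EARTH_R * math.cos(math.radians(ac_lat))
    down = ac_alt - tgt_alt
    pan = math.degrees(math.atan2(east, north))
    ground = math.hypot(north, east)
    tilt = -math.degrees(math.atan2(down, ground))
    slant = math.hypot(ground, down)
    return pan, tilt, slant
```

For a target 1000 m due north of an aircraft flying 1000 m above it, the gimbal is commanded to pan 0 degrees and tilt down 45 degrees, at a slant range of about 1414 m.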
The Moving Map software plugin enables additional functionality (in an additional software window):
Pointing at the selected point on the map.
Shows the target position and the field of view footprint on the map, corresponding to the video feed from the camera.
The ability to input a list of predefined targets allows the gimbal to point at any of them when necessary.
Supports online maps when the GCS has an internet connection.
Supports uploaded raster maps when an internet connection is not available.