Improvements in individual sensor technologies
With the proliferation of night vision technology, its mere possession no longer guarantees any combatant an advantage. Technology, doctrine, tactics and training must fuse into an overall capability that enables forces to create and maintain an edge over their opponents by day, by night, in the transitional periods around dawn and dusk, in all weathers and in all environments in which fighting is likely to take place, including cities. The number and variety of technologies involved are immense, and the number of individual pieces of equipment required quickly becomes enormous, especially for larger forces. Prioritising acquisition and upgrade programmes and building synergistic effects with the money available to armed forces therefore soon becomes a tremendously complex task.
The core technologies that penetrate darkness and obscurants primarily – but not exclusively – operate in regions of the electromagnetic spectrum that nature has not equipped humans to exploit using the sensors with which we are born. These extend from the radio frequency (RF) segment, which is the domain of radar, through the long-wave, medium-wave, short-wave and near portions of the infrared spectrum, into the visible, in which image intensifiers amplify any available visible light, a progression of increasing frequency and reducing wavelength. All these wavebands have their strengths and weaknesses from a military point of view, which MILITARY TECHNOLOGY previously covered in issue #6, 2016.
Converting this electromagnetic energy into images and information that humans can interpret and use to gain understanding of the environments and the tactical situations in which they find themselves, as well as bringing the ability to act on it more quickly than their opponents can, is the essence of the advantage the best equipment can provide.
Obviously, we humans are very visual creatures whose primary sensors, our eyes, don’t work very well in the dark. We have a primal fear of being attacked when we cannot see, stemming from the long evolutionary period in which we were not always at the top of the food chain. Therefore, in the millennia before cheap artificial light, most people preferred to be asleep during the hours of darkness, to the point at which anyone who was active at night was assumed either to be up to no good or guarding against those who might be.
One time-honoured and still relevant way of addressing this issue is to provide supplementary illumination. Handheld visible lighting, for example, whether a flaming torch or one of the latest high-powered LED devices, solves the problem of seeing where you are going, reducing the risk of tripping over and bumping into things while also enabling inspection of nearby dark corners in which threats might lurk. However, torches are active emitters, whose light can be seen at much greater distances than those at which they enable the user to see.
It is seeing without being seen that provides the big advantage. It is the improvements in performance and reductions in cost, weight and bulk of image intensification and thermal imaging equipment over the last three decades that have made this a reality, extending capabilities from large, costly platforms to the individual infantry soldier; even, importantly, into the law enforcement, security, transport and leisure markets.
Controlling the Spread of Technology
With the general public able to order high quality night vision equipment online, it is clear that the proliferation genie is out of the bottle; insurgents, terrorists and criminals can obtain it as easily as legitimate users. Military forces, therefore, seek to maintain their edge over both asymmetric and peer rivals by reserving the latest technologies and the best equipment for themselves.
This can mean prohibiting the export and non-military sale of equipment containing cameras with characteristics greater than some set value. In image intensifiers, for example, the measure is known as the Figure of Merit (FoM) and is calculated by multiplying the resolution, which is expressed as line pairs per millimetre, by the signal-to-noise ratio. Resolution is measured on test cards on which groups of horizontal and vertical lines are printed at different sizes; the closer together the pairs of lines can be while still being distinguishable, the higher the resolution.
The US Department of State’s standard release parameters for image intensified night vision devices to US municipal police, state or provincial law enforcement agencies are FoMs not exceeding 1,400 for ground use and 1,600 for aviation use. It does, however, consider requests for higher-performance equipment on a case-by-case basis, according to the DoD Transfer of Night Vision Devices (NVDs) Handbook.
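The FoM calculation and release thresholds described above can be illustrated with a short sketch; the thresholds are those cited in the text, while the tube's resolution and signal-to-noise figures are hypothetical examples chosen for illustration.

```python
# Illustrative Figure of Merit (FoM) calculation for an image intensifier tube.
# FoM = resolution (line pairs per millimetre) x signal-to-noise ratio.
# Thresholds are the State Department release parameters cited in the text;
# the example tube values (64 lp/mm, SNR 25) are hypothetical.

def figure_of_merit(resolution_lp_mm, signal_to_noise):
    """Multiply resolution in lp/mm by the signal-to-noise ratio."""
    return resolution_lp_mm * signal_to_noise

GROUND_LIMIT = 1400    # standard release limit for ground use
AVIATION_LIMIT = 1600  # standard release limit for aviation use

fom = figure_of_merit(64, 25)
print(fom)                    # 1600
print(fom <= GROUND_LIMIT)    # False: exceeds the ground-use release limit
print(fom <= AVIATION_LIMIT)  # True: at the aviation-use limit
```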
Image intensifiers also produce undesirable ‘halos’ around point light sources, and manufacturers seek to make these as small as possible, thereby providing the best possible image. The State Department also uses halo size as a limiting parameter for transfer of NVDs, that for the case above being a minimum of 0.85 millimetres.
A third characteristic used to govern transfer of image intensifiers is whether or not the power supply is autogated. Autogating involves switching the power supply on and off very rapidly and is a means of minimising degradation of the image through blooming, which occurs when the tube is overloaded by a bright light source.
Exportability from the US is governed by the International Trade in Arms Regulations (ITAR) administered by the US Department of State, and the Export Administration Regulations (EAR) administered by the US Department of Commerce. An FoM of 1,600 has been widely circulated as the limit under the guidelines set out in the US Night Vision Export Policy Implementation Guidance document, which applies to man-portable NVDs, but this is a confidential document and the actual numbers are not released.
One key parameter for thermal imaging sensors is the Noise Equivalent Differential Temperature (NEDT), which approximately represents the minimum temperature difference that the camera can resolve, expressed in milliKelvins (mK), according to FLIR Systems. For export regulation purposes, however, other parameters – such as frame rate, detector chip size (measured in the number of elements) and whether or not the camera is cooled – are used as controls.
These regulations provide opportunities for non-US manufacturers to offer high-performance equipment to customers unable to obtain it from the US, although European, Israeli and Turkish producers are constrained by their own governments’ export rules, which in turn opens up other markets to Russia and China.
More Than Just Sensors
However, export restrictions alone cannot guarantee that hostile forces will not obtain improved night vision sensors and related equipment. Leading armed forces, therefore, increasingly seek ways to obtain more value from the sensors they already have. Characteristics that can be enhanced include ubiquity, persistence, wide area coverage, advanced image processing, multi-sensor systems with fusion capabilities, networking and advanced display technologies including augmented reality; in addition the performance of individual sensors can be improved. These approaches and their integration promise to produce results much greater than the sum of their parts.
Ubiquity involves fielding more sensors in the battlespace on airborne, ground and maritime platforms, providing individual personnel with handheld sensors and teams with portable equipment, on infrastructure such as buildings and purpose-built towers, and with unattended ground sensors. Persistence indicates the assurance that areas of interest are under constant surveillance, something that is well served by the same means, with additional emphasis on long-endurance airborne platforms such as UAVs and aerostats. These also support Wide Area Motion Imagery (WAMI) sensors and radars, both of which overcome the ‘soda straw’ limitation associated with the fields of view of typical electro-optical sensors. WAMI systems have proved their worth on manned aircraft, UAVs and aerostats, while approximate analogues are emerging for use on the ground, including by dismounted troops.
The side that can achieve persistent ISR over large areas of interest, particularly at night, stands to gain a clear advantage in combat through much greater situational understanding. This can counteract the advantages enjoyed by terrorists and insurgents who, naturally, know when and where they want to strike and often seek the cover of darkness for their preparation.
Wider View, Advanced Processing
With the right sensors – and the latest systems are integrating a growing variety of them – WAMI systems can build up ‘pattern-of-life’ information, enabling operators to assess normality and to spot anomalies. WAMI systems typically incorporate low frame-rate visible light or visible and infrared cameras that build up mosaics of smaller images automatically stitched together by software with update rates of 1 or 2Hz, creating much larger composite images.
While the ability to cover wide areas persistently in all light levels is useful in itself, the processing software adds further value by enabling multiple users to define areas of interest and to establish ‘trip wires’ and other alerts – triggered, for example, by a vehicle crossing a line or a person entering or leaving a building – automatically tracking the object concerned and cueing a higher-resolution, higher frame-rate asset, such as a full motion video (FMV) camera, to take a closer look.
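The ‘trip wire’ alerting described above can be sketched in a few lines: detect when a tracked object's successive positions cross a user-defined line, then cue a narrow-field asset to the object's current position. The function names and the cueing callback are hypothetical, and real systems work in geographic rather than planar coordinates.

```python
# Minimal trip-wire sketch: flag a track that crosses a user-defined line
# and cue a higher-resolution (FMV) asset to its position.

def side_of_line(p, a, b):
    """Sign of the 2D cross product: which side of line a->b point p lies on."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def check_tripwire(track, line_a, line_b, cue_fmv):
    """track: successive (x, y) positions of one object. Cue on crossing."""
    for prev, curr in zip(track, track[1:]):
        s1 = side_of_line(prev, line_a, line_b)
        s2 = side_of_line(curr, line_a, line_b)
        if s1 == 0 or s2 == 0 or (s1 > 0) != (s2 > 0):  # sign change = crossing
            cue_fmv(curr)  # point the FMV camera at the current position
            return True
    return False

alerts = []
crossed = check_tripwire(
    [(0, 5), (4, 5), (8, 5)],        # eastbound vehicle track
    line_a=(6, 0), line_b=(6, 10),   # north-south trip wire at x = 6
    cue_fmv=alerts.append)
print(crossed, alerts)               # True [(8, 5)]
```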
One of the best known of these systems is the Harris CORVUSEYE family. The CORVUSEYE 1500CM, for example, features a 57 megapixel visible colour sensor and a 23 megapixel MWIR sensor. From 15,000ft it covers a circle of persistence up to 3.1km2 in area (2km diameter), with resolution expressed as Ground Sample Distance (GSD – the distance between pixel centres on the ground) of 0.26m for the visual camera and 0.45m for the IR camera. It packs this capability into a 15in (38cm) diameter turret measuring 19.8in (50.3cm) in height and weighing under 100lb (45.4kg).
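These coverage and GSD figures hang together arithmetically, as a quick back-of-envelope check shows; the pixel count across the array used below is a hypothetical round figure consistent with a roughly 57 megapixel sensor, not a published specification.

```python
import math

# Check the quoted coverage: a 2 km diameter circle of persistence should
# cover roughly 3.1 km^2, matching the figure in the text.
diameter_km = 2.0
area_km2 = math.pi * (diameter_km / 2) ** 2
print(round(area_km2, 1))  # 3.1

# Ground Sample Distance (GSD) is the on-ground spacing between pixel
# centres, so the swath a sensor covers is roughly pixels_across * GSD.
gsd_m = 0.26          # visible camera GSD at 15,000 ft, per the text
pixels_across = 7500  # hypothetical width for a ~57 MP array
swath_km = pixels_across * gsd_m / 1000
print(round(swath_km, 2))  # 1.95 - consistent with a ~2 km diameter circle
```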
Logos Technologies offers lighter systems for use with small aerostats. Its day/night capable offering is the 80lb (36kg) SIMERA, which uses eight cameras to create a 440 megapixel day/night mosaic covering a claimed 113km2 from 3,000ft. Logos claims a GSD of 0.35m at 3km and 0.6m at 6km with the visible light camera, with equivalent figures of 0.5m and 0.9m for the IR camera. SIMERA also has cross-cueing capability, provides multiple viewer windows and serves up to three simultaneous users, with options to add more. It will also store three days’ worth of imagery, again with the option to expand that capacity. The sensor measures 22in (55.88cm) long, 19in (48.26cm) wide and 30in (76.2cm) high.
Sensor Cross-Cueing
Such cross-cueing capability provides opportunities for WAMI systems to grow into multi-intelligence (multi-INT) systems, by pairing them with more traditional sensor turrets. For example, Logos and Raytheon announced in early October that they are partnering to provide multi-INT capabilities by pairing the former’s WAMI systems with the latter’s Multi-Spectral Targeting System (MTS) turret and a number of other sensor types, including hyperspectral imaging, LIDAR and signals intelligence. MTS provides high-definition FMV in the visible and IR spectra combined with laser illumination and target designation in a single turret.
The concept is to enable ISR operators to use multi-sensor data to detect, recognize and identify hard-to-find targets across wide areas and in near-real time, without extensive post-mission processing, by day and night. “Combining high-resolution FMV, wide-area motion ISR and other sensor modalities delivers an unparalleled advantage in real-time processing and data exploitation,” according to Fred Darlington, vice-president for ISR systems at Raytheon’s space and airborne systems division.
WAMI on the Ground
Wide FoV thermal imaging sensors for use by dismounted troops in mobile security operations, forward operating base protection and other missions demanding persistent surveillance and automatic target detection are also emerging. Some of the latest examples include the SUPERVISIR system from Elbit Systems and Controp’s TWISTER. Both have early target detection and investigation capabilities that can provide a decisive edge for combat forces.
The former is a cooled staring sensor in a tripod-mounted box that provides a 90×12.5° FoV and is designed to detect, track and display motion imagery of moving air, ground and sea targets, suiting it to border patrol, perimeter security, surveillance and counter-surveillance missions. As with the airborne WAMI systems cited above, its automatic capabilities include detection and extraction of target imagery feeds in multiple regions of interest, enabling visual investigation in real time and backtracking of the target to the point at which it first entered the sensor’s FoV.
A self-contained, tripod-mounted sensor, SupervisIR can also be networked with other sensors including radars and long-range surveillance and targeting systems such as Elbit’s own LONG-VIEW CR-D, a multi-sensor targeting system with laser designation capabilities. “We have taken cutting edge surveillance capabilities typically found in aerial platforms and incorporated them into a fully-persistent ground-based system”, said Elad Aharonson, General Manager of Elbit Systems’ ISTAR Division. “SUPERVISIR is not only a key advantage for detecting threats in a large area, but also provides an accessible, reliable and affordable alternative for closing intelligence and targeting cycles, all while significantly reducing size, weight, power and cost (SWAP-C).”
360° Night Vision
TWISTER takes the WFoV concept still further, with a high-definition IR camera that scans continuously through 360°, updating the image over short and intermediate distances once per second. The image processing software provides automatic target detection and tracking, in this case concentrating on moving human, vehicular and aircraft targets using NATO standard definitions at ranges up to several kilometres. This algorithm set is flexible enough for use in maritime, airborne and ground applications, Controp emphasises. Although TWISTER can cue longer-ranged sensors, it features a continuous optical zoom lens of its own that can be used for target recognition and identification. As with other WAMI systems, it can take snapshots while continuing to record other activities.
Although far from a hand-held sensor, TWISTER is portable by a crew of two in specially designed backpacks and, like SUPERVISIR, can be tripod-mounted. It can also be used as a fixed installation, mounted on a pole or a tower, with the crew operating it locally from a laptop or remotely over an Ethernet cable or a radio link.
Aviation Trickledown
As they are high-end platforms, embodying cutting-edge technologies across a wide spectrum, and since they retain a large measure of prestige and glamour, jet fighters provide a useful bellwether for the kinds of capabilities that can be expected to trickle down to other, less rarefied, sectors in due course.
A good example is BAE Systems’ STRIKER II Helmet Mounted Display (HMD), which successfully completed trials in July to demonstrate its integration with the Eurofighter TYPHOON. A further goal was to show that it can be used with existing aircraft with either digital or analogue electronics. STRIKER II is significant partly because it features an integrated digital night vision camera in place of conventional image intensified night vision goggles – a first in such a high-performance aviation application – and also because the symbology it provides amounts to an advanced augmented reality application. This dramatically improves the pilot’s situational awareness by day and night and through dusk and dawn transitions, thanks to its ability to blend camera imagery with the pilot’s direct view through its visor-projected display; accurate overlay depends on the system’s knowledge of where the pilot is looking, provided by an advanced head tracker.
Mounted just above the visor in the centre of the helmet, the monocular night vision camera features the ISIE 11 sensor from Intevac Photonics, based on Electron Bombarded Active Pixel Sensor (EBAPS) technology. This, BAE Systems emphasises, provides night vision acuity equivalent to or greater than that provided by image intensified aviator night vision goggles without the extra weight and balance problems they pose, particularly during high-g combat manoeuvring.
STRIKER II’s integration with the aircraft’s electronics is game changing; it can show targets detected by the radar – the original night vision device – or other sensors in symbolic form in an accurate position relative to the pilot’s position. BAE Systems provides an illustrative sequence to highlight the benefits: the radar detects an enemy aircraft, perhaps hidden from the pilot’s view by the airframe, and an arrow symbol projected onto the visor points down to the target symbol, enabling him or her to place a crosshair on it simply by looking at it, locking the aircraft’s weapon system onto it with a voice command. If a second hostile aircraft is detected, the pilot can lock the system on to it in the same way and decide which to engage first. STRIKER II can also accept video feeds from other cameras positioned around the aircraft, to provide the “see-through airframe” capability usually associated with the F-35.
Q-WARRIOR & BATTLEVIEW 360
Illustrating the trickledown effect, the company now offers two systems that exploit augmented reality technology to the benefit of troops on the ground: Q-WARRIOR and BATTLEVIEW 360. The former is a head-mounted display for dismounted troops, initially aimed at specialists such as forward air controllers; the second is an all-round situational awareness system with through-the-wall day and night vision capabilities.
Q-WARRIOR features a thin, low-profile holographic waveguide monocle that can be attached to a helmet to show information coming from the network via the soldier’s radio. Worn by a Joint Terminal Attack Controller (JTAC) or Forward Air Controller (FAC), it shows symbols representing aircraft or groups of aircraft overlaid on a real-world view with relevant details such as callsigns, distances, altitudes and weapon loads, along with other battlespace entities including targets and reported positions of enemy combatants, friendlies via Blue Force Tracking, non-combatants and landmarks. It can also display day and night sensor feeds from a variety of surveillance and targeting platforms, including UAVs. Furthermore, it is slim enough to fit between the wearer’s eye and a personal night vision device or weapon sight.
BATTLEVIEW 360 is designed to provide very similar capabilities, tailored to the needs of armoured vehicle crews and mounted infantry squads. In this case the view of the outside world – notoriously restricted from such vehicles – can be provided by day/night capable cameras in rugged, protected external mountings. The cameras feed their imagery to a video processing module that stitches their overlapping fields of view into a seamless picture from each user’s point of view, using information from individual head trackers.
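Before stitching and blending of the kind described above can happen, the video processor must work out which external cameras cover each user's current view direction. The sketch below illustrates that selection step only, using a hypothetical four-camera ring with overlapping 120° fields of view; the camera layout and names are illustrative, not BATTLEVIEW 360's actual configuration.

```python
# Pick which cameras in a ring cover a crew member's head azimuth.
# Hypothetical layout: four cameras at 90-degree spacing, each with a
# 120-degree FoV, so adjacent fields overlap and stitching is possible.

def angular_diff(a, b):
    """Smallest absolute difference between two azimuths, in degrees."""
    d = abs(a - b) % 360
    return min(d, 360 - d)

def cameras_for_view(head_az, cameras, fov_deg=120):
    """Return the names of cameras whose FoV contains the head azimuth."""
    return [name for name, cam_az in cameras
            if angular_diff(head_az, cam_az) <= fov_deg / 2]

RING = [("front", 0), ("right", 90), ("rear", 180), ("left", 270)]

print(cameras_for_view(45, RING))   # ['front', 'right'] - overlap zone,
                                    # so two feeds must be blended here
print(cameras_for_view(180, RING))  # ['rear'] - single feed suffices
```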
By plugging a tablet display into BATTLEVIEW 360, soldiers in the vehicle have access to the aircraft-derived digital map and battle management system, enabling them to collate, map and classify features on the battlefield. This enables them to understand their environment, increasing their combat effectiveness. It also enables the vehicle commander, gunner and driver to plan safer routes, share information more quickly and effectively, and to engage targets more rapidly.
Successfully tested on a CV90 combat vehicle, it has also been demonstrated on a BRADLEY IFV and, thanks to its open architecture, promises straightforward integration with other vehicles. The company has also integrated BATTLEVIEW 360 with a UAV, further enhancing situational awareness, particularly at night if the drone is equipped with suitable sensors.
UAV integration
Working with Thales, CMI Defence has also demonstrated the integration of a UAV control system and video feed with a combat vehicle turret, showing off the combination of its Cockerill 3105HP turret and Thales’ new SPYRANGER mini-UAV at Eurosatory in June. The team integrated software from Thales’ SPY’C ground control system into the 3105HP’s turret network controller and ballistic computer and adapted the turret’s existing displays to control the drone and use the simultaneous day TV and infrared video feeds from its sensor turret. The UAV effectively extends the range of the vehicle’s sensors and provides a much higher vantage point, by day and night.
This new capability is designed to improve accuracy in indirect fire, by using the drone as a forward observer, providing target localisation and designation, first firing assessment and corrective indication in artillery mode and battle damage assessment, says the company.
Elbit IRONVISION
Elbit Systems is working on a rival to BATTLEVIEW 360 in the form of its IRONVISION system, also based on technology developed for aviation applications, in this case its own TARGO helicopter helmet. This was initially adopted for tank and APC commanders and drivers, but is intended for any armoured vehicle without windows, according to Eran Golan, Elbit’s business unit manager for tanks and APCs, who demonstrated IRONVISION in a simulated vehicle interior to the author at Eurosatory.
“When you see something that looks like a target, press here with your palm and with your pointing finger, push this red button. Then the weapon station will point at whatever you are looking at. If you want to go back to situation awareness you press this red button again,” he said, demonstrating the essence of what Elbit calls its “look, lock, launch” concept. The helmet’s situational awareness view created the impression that the “vehicle” had transparent sides while still allowing the wearer to see the system controls, the vehicle interior and the observers, Golan and Ofir Binun, technical manager and one of the designers of IRONVISION.
Like BATTLEVIEW 360, IRONVISION shows BMS data and information relating to the vehicle, its direction of travel, true north and the driver view, according to Golan, who added that all crew members can see where the others are looking and all data relevant to their individual tasks.
It proved easy and intuitive to switch between the situational awareness and targeting views, pointing the weapon system at logos on other companies’ stands and then, after switching to the simulator training mode, pointing it at random targets. The simulation mode can be built into IRONVISION so that any vehicle equipped with it can act as its own simulator and classroom. Looking at the real outside world was comfortable, even with fairly rapid head movements, because the combination of static cameras, video processing software and robust optical head tracking with inertial back-up eliminated any latency and visual distortions that might have induced motion sickness. The inertial back-up enables the system to keep up with the wearer’s viewpoint, even if the helmet leaves the optical tracking system’s field of view, for example if a crew member opens a hatch for an external view. This ensures the weapon system can still be steered to his or her line of sight. The back-up lasts for about 10 minutes before it drifts too far and has to be recalibrated, which happens as soon as the wearer’s head returns to the vehicle so it can be seen by the optical tracker again.
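The optical-primary, inertial-backup tracking arrangement described above can be sketched as a simple state machine: use the optical solution whenever the helmet is in view, dead-reckon on inertial data when it is not, and flag recalibration once drift exceeds the roughly ten-minute budget mentioned in the text. The class and field names are hypothetical, and real trackers fuse the two sources continuously rather than switching between them.

```python
# Sketch of a hybrid head tracker with optical primary and inertial backup.
# The ~10-minute drift budget is taken from the text; the structure is an
# illustrative simplification, not Elbit's actual algorithm.

DRIFT_BUDGET_S = 600.0  # seconds of inertial-only tracking before recalibration

class HeadTracker:
    def __init__(self):
        self.inertial_elapsed = 0.0
        self.needs_recalibration = False

    def update(self, optical_pose, inertial_pose, dt):
        """Return the pose to use this frame; dt is seconds since last frame."""
        if optical_pose is not None:     # helmet visible to the optical tracker
            self.inertial_elapsed = 0.0  # inertial solution re-zeroed
            self.needs_recalibration = False
            return optical_pose
        self.inertial_elapsed += dt      # dead reckoning on inertial data only
        if self.inertial_elapsed > DRIFT_BUDGET_S:
            self.needs_recalibration = True  # drift now too large to trust
        return inertial_pose

tracker = HeadTracker()
print(tracker.update("optical", "inertial", 0.02))  # optical
print(tracker.update(None, "inertial", 601.0))      # inertial (hatch open)
print(tracker.needs_recalibration)                  # True
print(tracker.update("optical", "inertial", 0.02))  # optical - recalibrated
print(tracker.needs_recalibration)                  # False
```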
Even though IRONVISION is based on technology from the TARGO helicopter helmet, it will be much cheaper because the optical and tracking systems attach to a standard ballistic helmet that comes in small, medium and large sizes with an adjustable liner, rather than the much more tailored aviator helmet with its custom-made liner. The display is comparable to a militarised GOOGLE GLASS.
The system has been shown to be effective and has been demonstrated to several potential customers. By June, Binun and his team had been working on it for a couple of months, adapting off-the-shelf helicopter technology for armoured vehicle use, ruggedising it and planning to test it in tanks during August. “The optical tracker and the zero latency algorithm developed in-house by Elbit […] came off the shelf for our project”, he said. He emphasised that IRONVISION will connect to any BMS and any type of camera, provided it is digital and preferably high-definition. It will also work with any Remote Weapon Station (RWS) or UAV. “Any video source that can be shown on a regular display can be shown on IRONVISION,” the company stated.
While improvements in individual sensor technologies remain critically important, it is in their integration into multi-sensor, multi-INT systems, and in the application of advanced algorithms to their output, that the vital edge is likely to be found.
Peter Donaldson