This piece highlights the importance of well-engineered systems for extracting and moving the right information.
Cloud Front Group is known for its work with the best available American technologies, including capabilities from Saratoga Data Systems, Thetus, piXlogic, MetaCarta and others. The company has also built industry-leading capabilities to derive knowledge from raw data and to leverage existing enterprise capabilities in service of DoD and IC missions.
Highlights of this article include:
Today, UAVs alone are producing so much video feed that the intelligence community is finding it increasingly challenging to analyze and draw conclusions to determine “actionable intelligence.”
For one, there is the issue of knowing exactly where and when to look, especially since the area beneath an aircraft's flight path can range from 25 to 100 square kilometers. With traditional EO/IR video, aircraft are limited to watching approximately 1 percent of that area (roughly 0.25 to 1 square kilometer) with adequate resolution.
With the growing influx of data that is being produced by UAVs, it has become increasingly necessary to bring computers into the loop, noted Jonathan “Michael” Ehrlich, product manager, GEOINT Enterprise Solutions, ITT Exelis Geospatial Systems. “The quantity of data being generated requires computer systems not only to provide a means to collect, store and distribute the data, but also to analyze and catalog the video data for analysts and decision makers,” he said.
“Functionally, information overload is a real concern for operators. All relevant data input must be matched with support for interpreting, correlating, summarizing and visualizing the data against the existing knowledge base, transforming raw bits of data into actionable intelligence,” commented John Mackay, president and chief executive officer of Cloud Front Group.
Video is quickly becoming the most in-demand sensor intelligence on the battlefield, making the ability to transport and mine it a top priority. Video data has been increasing along several dimensions, including the number of sensors and platforms, the types of sensors, and the resolution and frame rate of the data acquired.
From Platform to Analyst
The increased adoption of UAVs has reduced the cost of data acquisition operations, allowing more frequent and longer duration missions to be executed. But the challenge with UAV video is to quickly and efficiently get it from the platform to the analyst, while also giving analysts the ability to quickly identify priority intelligence requirements in near real-time or later without having to watch hours of unchanging video.
In addition, video requires significant bandwidth to deliver, which places demanding network requirements on real-time and tactical applications.
The Cloud Front Group has put together an integrated package of technologies to solve the problem of quickly disseminating relevant video data to tactical or real-time operators. “To achieve this goal, imagery object recognition and search software from piXlogic is used to scan captured video for objects of interest, such as certain vehicles or people,” Mackay explained.
Once identified, the segment of video surrounding the detected object is immediately routed to any subscribed operator using Flume, an advanced file transfer and synchronization software solution from Saratoga Data Systems.
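The detect-and-route workflow described above can be sketched as a simple pipeline. The detector and delivery steps below are hypothetical stand-ins, not the actual piXlogic or Flume APIs, which are proprietary; the sketch only illustrates the control flow from detection to subscriber dissemination.

```python
from dataclasses import dataclass

# Hypothetical stand-ins for the piXlogic recognizer and Flume delivery;
# the real products expose their own interfaces.

@dataclass
class Detection:
    frame_index: int
    label: str  # e.g. "vehicle" or "person"

def detect_objects(frame_index: int, frame: bytes) -> list[Detection]:
    """Stub object recognizer: flags any frame containing a 'vehicle' marker."""
    return [Detection(frame_index, "vehicle")] if b"vehicle" in frame else []

def clip_window(n_frames: int, hit: int, pad: int = 2) -> range:
    """Frame indices surrounding a detection, clamped to the feed length."""
    return range(max(0, hit - pad), min(n_frames, hit + pad + 1))

def route_segments(frames: list[bytes], subscribers: list[str]) -> dict[str, list[int]]:
    """Scan the feed; for each detection, send the surrounding video segment
    (represented here by frame indices) to every subscribed operator."""
    delivered: dict[str, list[int]] = {s: [] for s in subscribers}
    for i, frame in enumerate(frames):
        for det in detect_objects(i, frame):
            segment = list(clip_window(len(frames), det.frame_index))
            for sub in subscribers:
                # In the described system this would be a Flume file transfer.
                delivered[sub].extend(f for f in segment if f not in delivered[sub])
    return delivered

feed = [b"sky", b"road", b"road vehicle", b"road", b"sky", b"sky"]
print(route_segments(feed, ["operator-1"]))  # frames 0-4 surround the hit at frame 2
```

Only the short segment around each detection is pushed forward; the full feed stays on the platform for post-mission download, mirroring the bandwidth-saving design described in the article.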
“The entire sensor feed continues to be recorded and can be downloaded once its mission completes, but real-time operations can be positively affected by this dissemination of relevant segments, requiring much less bandwidth and providing more resilience to network challenges than actual video streaming,” he said.
The ability to collect, store and distribute metadata, as well as tag or mark events or sequences of interest, greatly assists in working with the large quantity of video feeds collected.
Wide area motion imagery (WAMI) and other large-volume data sources would require an even larger number of analysts to monitor activities if the current paradigm were simply extended. Unlike full motion video (FMV), WAMI provides high-resolution imagery over a large ground footprint for long periods of time. This allows persistent surveillance over city-scale regions and enables intelligence to be gathered from the motion patterns and locations of many simultaneous targets across a large region of interest.
For this concept of operations to work, the video processing must occur as close to real-time as possible. This temporal proximity requirement suggests that the video processing needs to be in physical proximity to the collection device in order to reduce transmission delay. The reliability of such recognition improves with the quality of the imagery being processed, which generally translates to being closer to the capture source so that compression, recoding and transmission do not degrade the data. Hence, the video processing system must be deployed on the same local area network as the sensor itself.
“Using our UAV example, the UAV should have on-board object recognition processing as well as video capture capabilities,” Mackay described.
The information provided by the alert must include the sensor data that caused the alert. This video segment then needs to be delivered with the highest-possible resolution to ensure operators can interpret the video accurately and make the best recommendation for the success of the mission.
Given the network challenges in tactical environments, an efficient compression and transmission protocol must be leveraged to provide the maximum possible video resolution over the available network conditions. Such a protocol must be resilient against network latency, intermittency, and the error rates common in tactical conditions so that missions can rely on the alerts being delivered.
“To address this technology challenge we use Saratoga Data Systems’ Flume, a 100 percent software solution to file transfers,” Mackay reported. “In recent testing by the Air Force and Army, Flume has demonstrated significant improvement over standard network file transfer protocols.”
For more see: “Analysis for Action”