Towards autonomous image fusion
Conference contribution, posted on 2010-01-01, authored by Mohammed Hossny, Saeid Nahavandi, Douglas Creighton, Asim Bhatti
Mobile robots provide great assistance operating in hazardous environments such as nuclear cores, battlefields, natural disaster sites, and even at the nano-scale of human cells. These robots are usually equipped with a wide variety of sensors to collect data and guide their navigation. Whether a single robot operates all sensors or a swarm of cooperating robots each operates its own specialised sensors, the captured data can be too large to transfer across the limited resources (e.g. bandwidth, battery, processing, and response time) available in hazardous environments. Therefore, local computations must be carried out on board the swarming robots to assess the worthiness of captured data and the capacity of fused information in a given spatial dimension, as well as to select a proper combination of fusion algorithms and metrics. This paper introduces the concepts of Type-I and Type-II fusion errors, fusion capacity, and fusion worthiness. Together, these concepts form the ladder leading to autonomous fusion systems.
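The abstract does not define the worthiness assessment itself, but the on-board decision it describes, weighing the value of captured data against limited resources before fusing or transmitting, can be sketched as a toy gate. Everything below (the score formula, function names, and threshold) is an illustrative assumption, not the paper's method:

```python
def fusion_worthiness(info_gain: float, resource_cost: float) -> float:
    """Hypothetical worthiness score: information gained per unit of
    resource cost (bandwidth, battery, processing, response time).
    The formula is an assumed illustration, not from the paper."""
    if resource_cost <= 0:
        return 0.0
    return info_gain / resource_cost


def should_fuse_on_board(info_gain: float, resource_cost: float,
                         threshold: float = 1.0) -> bool:
    """Fuse locally only when the assumed worthiness score clears a
    chosen threshold; otherwise the data might be deferred or dropped."""
    return fusion_worthiness(info_gain, resource_cost) >= threshold


# A frame whose information content outweighs its transfer/compute cost
print(should_fuse_on_board(info_gain=3.0, resource_cost=1.5))  # True
# A frame whose cost dominates its information content
print(should_fuse_on_board(info_gain=0.4, resource_cost=2.0))  # False
```

The point of the sketch is only the shape of the decision: a local, cheap computation that gates the expensive fusion and transmission steps.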