Disaster management, as a hybrid research area, has attracted the attention of many research communities, including business, computer science, the health sciences, and the environmental sciences. According to Federal Emergency Management Agency (FEMA) policy, disasters fall into two main categories: (1) technological, such as emergencies involving hazardous materials, terrorism, and nuclear power plants; and (2) natural, such as floods, earthquakes, and forest fires. Regardless of a disaster's nature, effective management of almost all disasters shares certain requirements: prevention, advance warning, early detection, early notification of the public and the concerned authorities, response mobilization, damage containment, and the provision of medical care and relief to affected citizens. Disaster management comprises four main phases (preparedness, mitigation, response, and recovery), each of which requires different types of data, needed by different communities. Such data can be processed using data analysis technologies such as information extraction, information retrieval, information filtering, data mining, and decision support.

CCTV cameras can assist in the early detection of disasters such as fire and flood, which in turn helps disaster management teams recover quickly and reduce the loss of human life. Fire disasters mainly occur through human error or system failure, causing economic and ecological damage as well as endangering human lives. In 2015 alone, wildfire disasters claimed 494,000 victims and caused damage worth US$3.1 billion, and each year roughly 10,000 km² of vegetation is affected by fire disasters in Europe.
Two broad categories of approach can be identified for fire detection: (1) traditional fire alarms and (2) vision sensor-assisted fire detection. Traditional fire alarm systems are based on sensors, such as infrared and optical sensors, that require close proximity to the fire for activation. These sensors are not well suited to critical environments and need human involvement to confirm a fire when an alarm is raised, which entails a visit to the fire's location. Furthermore, such systems usually cannot provide information about the size, location, and burning degree of the fire. To overcome these limitations, researchers have explored numerous vision sensor-based methods, which offer the advantages of less human intervention, faster response, affordable cost, and larger surveillance coverage. In addition, such systems can confirm a fire without a visit to its location and can provide detailed information about the fire, including its location, size, and burning degree. Despite these advantages, such systems still face issues, e.g., the complexity of the scenes under observation, irregular lighting, and low-quality frames; researchers have made several attempts to address these issues, taking into consideration both color and motion features.
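To make the colour-feature idea concrete, the following is a minimal sketch (not any specific published method) of a rule-based fire-pixel test in the RGB space: a pixel is flagged as fire-coloured when red dominates (R ≥ G > B) and exceeds a brightness threshold. The function names and the threshold value of 190 are illustrative assumptions; real systems combine such colour rules with motion cues and temporal filtering.

```python
def is_fire_pixel(r, g, b, r_threshold=190):
    """Flag a pixel as fire-coloured using a simple RGB dominance rule.

    Assumption: flames appear as bright, red-dominant pixels, so we
    require R >= G > B and R above a brightness threshold. The threshold
    (190) is an illustrative value, not a tuned parameter.
    """
    return r >= r_threshold and r >= g and g > b


def fire_mask(image, r_threshold=190):
    """Return a boolean mask of fire-coloured pixels.

    `image` is given as a list of rows, each row a list of (R, G, B)
    tuples with 8-bit channel values.
    """
    return [[is_fire_pixel(r, g, b, r_threshold) for (r, g, b) in row]
            for row in image]


# A bright orange pixel passes the rule; a blue pixel does not.
sample = [[(255, 120, 40), (30, 80, 200)]]
print(fire_mask(sample))  # [[True, False]]
```

Rules of this kind are cheap to evaluate per frame, but they are exactly the hand-crafted features whose brittleness under irregular lighting and complex scenes motivates the learned features discussed next.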
There is a need for a better trade-off between detection accuracy and computational cost for several application scenarios of practical interest, in which existing computationally expensive methods do not fit well. To address these issues, this project aims to investigate convolutional neural network (CNN)-based deep features for early fire detection in surveillance networks.
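The core operation behind CNN-based deep features is the 2D convolution of an image with a learned kernel. The sketch below illustrates that operation in plain Python on a single-channel image; a real fire detector would stack many learned kernels with non-linearities and pooling, so this is an assumption-laden illustration of the mechanism, not the project's model.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution (cross-correlation) of a grayscale
    image with a single kernel.

    `image` and `kernel` are lists of lists of numbers. Each output
    value is the sum of element-wise products between the kernel and
    the image patch under it, which is how a CNN layer extracts local
    features such as flame edges or bright blobs.
    """
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(image[i + di][j + dj] * kernel[di][dj]
                           for di in range(kh) for dj in range(kw)))
        out.append(row)
    return out


# Sliding a 2x2 identity-diagonal kernel over a 3x3 image.
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
kernel = [[1, 0],
          [0, 1]]
print(conv2d(img, kernel))  # [[6, 8], [12, 14]]
```

In a trained CNN the kernel weights are learned from labelled fire and non-fire frames rather than hand-designed, which is precisely what lets deep features outperform fixed colour and motion rules under difficult lighting and scene conditions.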
The ideal candidate should have a good knowledge of machine learning. Strong programming skills are desirable.