  Visual Internet of Things: Event-based Vision meets Artificial Intelligence under the constraints of IoT devices


   Department of Computer Science

Supervisor: Dr Nabeel Khan
Applications accepted all year round
Self-Funded PhD Students Only

About the Project

The Internet of Things (IoT) framework has continued to evolve: initially, IoT systems comprised many small, closed networks, but the concept has grown to encompass larger, more connected networks, such as smart cities with smart transport infrastructures. Despite this continued development, IoT frameworks rarely include visual data, mainly because of the high bandwidth and high power consumption of visual capture devices. Since sight is our most powerful sense, combining visual sensors with other IoT data streams and adding machine analytics would be immensely valuable.

Dynamic Vision Sensors (DVS) are based on the principle of biological sensing: they report only on/off changes of brightness in the observed scene. Unlike frame-based cameras, where frames are acquired at regular time intervals, a DVS asynchronously acquires pixel-level light-intensity changes with a time resolution down to a microsecond. Events are triggered whenever the neuromorphic vision sensor moves, the light conditions in the scene change, or both; no data is transmitted for a stationary sensor observing a static scene. These properties give neuromorphic vision sensors low bandwidth, wide dynamic range, low latency, and low power requirements, offering advantages over conventional vision sensors in real-time interactive systems such as robotics, drones, and autonomous driving. This sensing technology therefore has the potential to meet the low-bandwidth and low-power requirements of the IoT framework.
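The event-generation principle above can be summarised in a few lines of code. Below is a minimal sketch, assuming a frame-based simulation of a DVS: an event (t, x, y, polarity) fires whenever the log intensity at a pixel changes by more than a contrast threshold since the last event at that pixel. The threshold value, the tuple layout, and the function name `frames_to_events` are illustrative assumptions; a real sensor fires asynchronously in hardware rather than at frame timestamps.

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Simulate DVS output from a sequence of grayscale frames.

    An event (t, x, y, polarity) fires at a pixel whenever its log
    intensity changes by more than `threshold` since the last event
    at that pixel, mimicking the asynchronous DVS principle.
    """
    eps = 1e-6                                         # avoid log(0)
    ref = np.log(frames[0].astype(np.float64) + eps)   # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + eps)
        diff = log_i - ref
        for polarity, mask in ((1, diff >= threshold), (-1, diff <= -threshold)):
            ys, xs = np.nonzero(mask)
            events.extend((t, int(x), int(y), polarity) for x, y in zip(xs, ys))
            ref[mask] = log_i[mask]                    # reset reference where events fired
    events.sort(key=lambda e: e[0])
    return events
```

Note how the sketch reproduces the property described above: for a stationary sensor observing a static scene, `diff` is zero everywhere, so no events, and hence no data, are produced.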

The project would explore the feasibility of three potential solutions for the Visual Internet of Things:

Cloud- or edge-based processing of visual data: In the near future, most envisaged services performing object/gesture recognition or classification will run on cloud or edge computing. These services would therefore require the transmission of DVS events to cloud/edge servers for processing. Such a solution would involve application-level compression (lossy or lossless) of the DVS data at the host device (see the first sketch after these three options).

Processing of visual data at the host device: This solution involves performing machine analytics on the resource-constrained host device. Since machine analytics are computationally demanding, a feasible solution would involve novel, less computationally intensive analytics strategies (see the second sketch after these three options).

Hybrid approach: The third solution combines the two approaches, performing less intensive processing at the host device and sending the encoded visual data to cloud/edge servers for further processing.
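For the first solution, the host device must serialise and compress the event stream before transmission. The following is a minimal sketch of application-level lossless compression, assuming the (t, x, y, polarity) tuples from the earlier sketch with integer (e.g. microsecond) timestamps: timestamps are delta-encoded, fields are packed into fixed-width integers, and the byte stream is deflated with zlib. The record layout and codec choice are illustrative assumptions, not a specific published DVS coding scheme.

```python
import struct
import zlib

def compress_events(events):
    """Losslessly compress a time-sorted (t, x, y, polarity) event stream.

    Timestamps are delta-encoded (small deltas compress well) and packed
    as little-endian u32/u16/u16/u8 records before zlib deflation.
    """
    buf = bytearray()
    prev_t = 0
    for t, x, y, p in events:
        buf += struct.pack("<IHHB", t - prev_t, x, y, 1 if p > 0 else 0)
        prev_t = t
    return zlib.compress(bytes(buf), level=9)

def decompress_events(blob):
    """Invert compress_events, recovering the exact event stream."""
    raw = zlib.decompress(blob)
    events, prev_t = [], 0
    for offset in range(0, len(raw), 9):               # 9 bytes per packed record
        dt, x, y, p = struct.unpack_from("<IHHB", raw, offset)
        prev_t += dt
        events.append((prev_t, x, y, 1 if p else -1))
    return events
```

Because the pipeline is lossless, `decompress_events(compress_events(ev)) == ev` holds for any such stream, which is the property cloud/edge analytics would rely on.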
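For the second solution, analytics at the host must be far cheaper than conventional frame-based vision. Below is a minimal sketch of one deliberately lightweight event representation, again assuming the same tuple format: events are aggregated into a coarse per-cell count grid at a cost of one increment per event, which a tiny on-device classifier could consume; under the hybrid approach, the same grid could be encoded and forwarded instead of the raw stream. The cell size, grid layout, and function name are illustrative assumptions.

```python
import numpy as np

def event_count_grid(events, width, height, cell=8):
    """Aggregate (t, x, y, polarity) events into a coarse count grid.

    Counting events per spatial cell needs one increment per event and
    no frame reconstruction, so it suits resource-constrained hosts.
    """
    grid = np.zeros((height // cell, width // cell), dtype=np.uint32)
    for _, x, y, _ in events:
        grid[y // cell, x // cell] += 1
    return grid
```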


