Next-generation Internet-of-Things (IoT) systems are becoming an essential part of our day-to-day environment. Smart cities, smart manufacturing, augmented reality, Industry 4.0, and self-driving cars are just some of the domains where such systems are extremely impactful, because the huge volumes of data they generate can be used to train AI models that make dynamic reconfiguration decisions. Such systems must satisfy versatile requirements, including high bandwidth, privacy sensitivity, and context awareness (e.g., time/location awareness), while simultaneously requiring access to geographically distributed arrays of sensors, to remote localised computational resources of heterogeneous natures, and to large-scale on-the-fly multi-cloud computational resources. These requirements gave birth to a new computing model, the Cloud-to-Things computing continuum, in which computation, storage, data management, and decision making occur along the path from edge devices close to the data source to the cloud.
The natural extension from the cloud model to the Cloud-to-Things continuum has significantly increased the complexity of orchestrating application services: orchestration solutions are now expected to operate in multi-administrative environments and must deal with challenges such as heterogeneity, volatile connectivity, mobility, system-specific contextual constraints, resource-constrained computational capabilities, rapidly changing environments, versatile security-threat scenarios, and application performance monitoring at different levels. The existing landscape of Cloud-to-Things orchestration includes: (1) resource-provider solutions such as Amazon GreenGrass, Microsoft's Azure IoT Core, and Google's Cloud IoT Core, which suffer from vendor lock-in; (2) low-level infrastructure solutions such as KubeEdge and Kubefed, which require specific low-level infrastructure knowledge; and (3) reference architectures such as ENORM and Foggy, which lack various essential building blocks, such as a specification layer and runtime reconfigurability.
This PhD project aims to evolve the concept of Cloud-to-Things orchestration by developing a uniform, vendor- and technology-agnostic orchestration system that facilitates the automated deployment and runtime management of next-generation intelligent IoT systems across the Cloud-to-Things ecosystem from a single model-driven specification of the input system, with deployment and runtime reconfiguration decisions driven by an intelligent context-aware engine. The key aspects of this project include:
- Design of a high-level modular architecture for an application-level Cloud-to-Things orchestration framework,
- Ensuring interoperability across different cloud (resource) providers,
- Efficient connectivity and management of volatile non-cloud (fog/edge) resources,
- Integration of AI-driven context-aware mechanisms to formulate runtime reconfiguration decisions,
- Effective and secure coordination across the various building blocks of the system in the Cloud-to-Things continuum.
A first degree (at least a 2.1), ideally in Computer Science or Engineering, with good foundational knowledge of software engineering, computer programming, cloud technologies, and machine learning.
English Language Requirement
An IELTS score of at least 6.5 (with not less than 6.0 in each of the four components) is required. Other equivalent qualifications will be accepted. Full details can be found online at https://www.napier.ac.uk/research-and-innovation/research-degrees/application-process
- Experience with fundamental software engineering
- Competence in one or more programming languages
- Knowledge of cloud, IoT, and microservices architectures
- Good written and oral communication skills
- Strong motivation, with evidence of independent research skills relevant to the project
- Good time management