Project Highlights
To investigate novel AI models for producing automated digital twins with physics embedded in the learning process
To investigate digital twins that learn physics from the data
To investigate self-learning digital twins that can learn from the operational environment and can evolve as a result of new observations
Project description
This is a collaborative research project between the University of Leicester, BT and Bloc Digital. BT will offer expertise and data in monitoring large-scale infrastructure projects such as 5G deployment and network planning. Bloc Digital will offer expertise in virtual and augmented reality and its use in producing digital twins. The University of Leicester will research and develop next-generation AI algorithms that can produce physics-informed digital twins in real time from vast amounts of data using HPC and in-memory clouds.
The ability to visualize a system outcome before it happens is extremely valuable and digital twins enable this to happen using virtual reality models. Using digital twins, engineers can design better products while users can see final products pre-production, ultimately saving everyone time and resources.
Currently it takes weeks or months to produce digital twins because the process is manual and time-consuming. This project aims to investigate an automated digital twin model that can learn physics from engineering and infrastructure data models and iteratively evolve as new data and insight become available over time. This will necessitate the creation of new physics-informed AI models and algorithms that can learn physics from engineering models and map the objects in data sources to virtual reality components at minimal computational cost. This will require building automated mesh creation models, developing correspondence between high- and low-polygon meshes, and developing algorithms for automatic assessment of a model's visual quality. The resulting digital models will be put into practice to generate data that realistically emulates the experimental environment and will be used to self-learn and optimize the digital twins so that they represent evolving functionality in the physical world.
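One established family of techniques for the high-to-low polygon correspondence mentioned above is mesh decimation. As a minimal, hedged sketch (the project's actual algorithms are to be researched; the function name and grid-based strategy here are purely illustrative), vertex-clustering decimation snaps vertices to a coarse spatial grid and merges those that fall in the same cell:

```python
def decimate_by_clustering(vertices, triangles, cell_size):
    """Vertex-clustering decimation: snap each vertex to a coarse grid
    cell, replace all vertices in a cell with their centroid, and drop
    triangles that collapse (two or more corners in the same cell)."""
    cell_of = []   # vertex index -> the grid cell it falls into
    cells = {}     # grid cell -> running (sum_x, sum_y, sum_z, count)
    for (x, y, z) in vertices:
        key = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        cell_of.append(key)
        sx, sy, sz, n = cells.get(key, (0.0, 0.0, 0.0, 0))
        cells[key] = (sx + x, sy + y, sz + z, n + 1)
    # One representative vertex per cell: the centroid of its members.
    new_index = {key: i for i, key in enumerate(cells)}
    new_vertices = [(sx / n, sy / n, sz / n)
                    for (sx, sy, sz, n) in cells.values()]
    new_triangles = []
    for (a, b, c) in triangles:
        ia, ib, ic = (new_index[cell_of[a]],
                      new_index[cell_of[b]],
                      new_index[cell_of[c]])
        if len({ia, ib, ic}) == 3:  # keep only non-degenerate triangles
            new_triangles.append((ia, ib, ic))
    return new_vertices, new_triangles
```

Coarser `cell_size` values yield fewer polygons at the cost of geometric fidelity; trading these off automatically is exactly where the visual quality assessment models described below would come in.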
These models and algorithms should be able to exploit high-performance computing for real-time model execution in large-scale immersive design and analytics. Dynamically integrating data, algorithms and analytics into a VR environment means piecing together hundreds of thousands of objects, and requires a careful understanding of the geometric and system models to minimise the computational burden. To overcome this problem, we will use high-performance in-memory systems, storing the components of the models in distributed shared memory while the algorithms process the data for their models.
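As a small, hedged illustration of the shared-memory idea on a single node (the project's distributed in-memory systems would span machines; this sketch uses only Python's standard library, and the data layout is hypothetical), model components can be placed in a named shared-memory segment that co-located worker processes attach to by name, rather than each receiving a serialised copy:

```python
import struct
from multiprocessing import shared_memory

# Illustrative payload: a small block of 3D vertex coordinates.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
payload = b"".join(struct.pack("3d", *v) for v in vertices)

# Producer: publish the block into a shared-memory segment.
shm = shared_memory.SharedMemory(create=True, size=len(payload))
shm.buf[:len(payload)] = payload

# Worker: attach to the same segment by its generated name and read
# a vertex in place, with no copy of the full block.
worker_view = shared_memory.SharedMemory(name=shm.name)
stride = struct.calcsize("3d")
second_vertex = struct.unpack_from("3d", worker_view.buf, 1 * stride)

worker_view.close()
shm.close()
shm.unlink()
```

The same pattern, scaled out by a distributed in-memory store, is what lets many learning algorithms read mesh components concurrently without duplicating them per process.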
Methodology
1. We will create a large database of objects from distributed data sources that will be converted into low polygon meshes using a set of physics informed machine learning algorithms. The resulting meshes will become the building blocks for training and learning the low-overhead reinforcement and transfer learning algorithms leading to automated production of digital twins. We will employ distributed machine learning approaches to parallelize the learning process.
2. We will investigate new machine learning-based visual quality assessment models that combine geometric and perceptual metrics to determine whether a given mesh of an object in a digital twin has been unacceptably distorted. This will be achieved by a correspondence process: a mathematical model that maps a 3D object with a high number of polygons to one with a low number of polygons that can be rendered efficiently.
3. The resulting meshes of the digital objects, together with the visual quality assessment model, will be integrated into a single digital model that will be put into practice to generate data. The data produced will be a realistic emulation of the experimental environment and will be used to self-learn and optimize the digital twins to represent evolving functionality in the physical world.
4. The distributed machine learning algorithms will be optimized to make use of distributed in-memory clouds, GPUs and HPC resources. New scheduling, resource management and I/O-based approaches will be introduced to run these algorithms in real time, leading to the production of automated real-time digital twins from large datasets.
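The quality assessment model in step 2 is itself a research question; as a minimal sketch of its geometric component only (perceptual metrics are out of scope here, and this particular measure is an illustrative choice, not the project's method), the symmetric Hausdorff distance gives a crude bound on how far a simplified mesh's vertices stray from the original:

```python
import math

def hausdorff(points_a, points_b):
    """Symmetric Hausdorff distance between two vertex sets: the
    largest distance from any point in one set to its nearest
    neighbour in the other. A simple geometric distortion measure
    for comparing a simplified mesh against the original."""
    def directed(src, dst):
        return max(min(math.dist(p, q) for q in dst) for p in src)
    return max(directed(points_a, points_b),
               directed(points_b, points_a))
```

A learned assessment model would go further, weighting perceptual factors such as silhouette and curvature error alongside purely geometric distances like this one.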
Eligibility:
UK and International* applicants are welcome to apply.
Entry requirements:
Applicants are required to hold, or expect to obtain, a UK Bachelor's degree at 2:1 or better in a relevant subject, or an overseas equivalent.
The University of Leicester English language requirements may apply.
To apply
Please refer to the information and How to Apply section on our web site https://le.ac.uk/study/research-degrees/funded-opportunities/cms-gta
Please ensure you include the supervisor and project title on your application.