
  A robust and accurate facial 3D reconstruction method from images acquired by mobile devices at home for facial growth monitoring


   School of Science and Engineering

Dr Ludovic Magerand, Prof E Trucco, Prof Peter Mossey | Applications accepted all year round | Self-Funded PhD Students Only

About the Project

Diseases affecting facial growth require highly accurate facial 3D scans to diagnose, monitor and plan treatment of the condition. Currently, the most widely used technique involves professional equipment, such as the Vectra system, in a clinical setting: healthcare professionals simultaneously capture multiple overlapping images of the person’s face from several calibrated cameras at different viewpoints. These images are merged into a single 3D shape with digital stereo-photogrammetry, which is considered a gold standard for structure analysis in facial anthropometry, with sub-millimetre accuracy.
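To make the photogrammetric principle concrete, the sketch below (a hedged illustration only, not the Vectra pipeline) triangulates synthetic facial points from two calibrated views with OpenCV; the intrinsics, camera poses and point coordinates are invented for the example.

```python
# Minimal, self-contained sketch (placeholder values throughout): two calibrated
# cameras observe the same facial points, and the corresponding 2D detections
# are triangulated back into 3D, the basic operation behind stereo-photogrammetry.
import numpy as np
import cv2

# Shared intrinsics and two camera poses (assumed values).
K = np.array([[1500.0, 0.0, 960.0],
              [0.0, 1500.0, 540.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])          # reference camera
R2 = cv2.Rodrigues(np.array([[0.0], [0.2], [0.0]]))[0]     # second view rotated ~11 degrees
t2 = np.array([[-0.1], [0.0], [0.0]])                      # 10 cm baseline
P2 = K @ np.hstack([R2, t2])

# Synthetic 3D facial points (metres, reference-camera frame) used as ground truth.
X_true = np.array([[0.03, -0.02, 0.60],
                   [-0.04, 0.01, 0.62]])

def project(P, X):
    """Project Nx3 points with a 3x4 camera matrix, returning 2xN pixel coordinates."""
    Xh = np.hstack([X, np.ones((len(X), 1))]).T
    x = P @ Xh
    return x[:2] / x[2]

pts1, pts2 = project(P1, X_true), project(P2, X_true)      # simulated detections

# Linear triangulation; the result is 4xN homogeneous coordinates.
Xh = cv2.triangulatePoints(P1, P2, pts1, pts2)
X_rec = (Xh[:3] / Xh[3]).T
print("max reconstruction error (m):", np.abs(X_rec - X_true).max())
```

With exact correspondences and known calibration the error is essentially zero; in practice the accuracy of such a pipeline is limited by detection noise and calibration quality.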

Recently, 3D reconstruction using facial images captured by smartphone cameras has shown potential as an alternative with similar accuracy [9-12]. However, these studies were mostly carried out in clinical settings with healthcare professionals taking the images, or relied on active depth sensors, a technology only available on high-end devices. This project aims to remove the requirement for patients to visit the clinic by developing a computer vision system that obtains highly accurate and dense facial 3D reconstructions from images acquired with end-user mobile devices in real-world conditions, such as one’s smartphone at home through a purpose-made mobile application.

In Computer Vision, 3D reconstruction of the face from images acquired with uncalibrated cameras is a well-studied problem. Highly accurate 3D models can be obtained from multiple images with optimization-based methods relying on motion analysis and geometrical constraints [3,4]. Recently, deep learning approaches combining geometry and light-property estimation have emerged [5-7]. While they achieve impressive results and allow reconstruction from images captured by laypersons, their applicability to anatomical measurement has not been validated. Moreover, these methods do not leverage the progress made in integrating multi-view geometry constraints into deep learning-based Structure-from-Motion [1,2].
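As a minimal illustration of the geometry-based side of such pipelines, the following sketch estimates the relative pose between two uncalibrated face images from matched features and an essential matrix; the image file names and the rough intrinsics guess are assumptions made only for this example, not part of the project description.

```python
# Illustrative two-view Structure-from-Motion step: match local features,
# estimate the essential matrix with RANSAC, and recover the relative pose.
import numpy as np
import cv2

img1 = cv2.imread("face_view1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical inputs
img2 = cv2.imread("face_view2.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and match local features (ORB keeps the example dependency-free).
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

# Rough intrinsics for an uncalibrated smartphone camera (focal length ~ image width).
h, w = img1.shape
K = np.array([[w, 0, w / 2], [0, w, h / 2], [0, 0, 1]], dtype=np.float64)

# Essential-matrix estimation with RANSAC enforces the epipolar constraint and
# rejects mismatches; the relative pose (R, t) is then recovered from E.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("inlier matches:", int(inliers.sum()))
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```

Full optimization-based pipelines extend this two-view step to many views with bundle adjustment, which is precisely the component that deep learning-based Structure-from-Motion methods [1,2] aim to integrate.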

In the proposed research, we will build on these recent works, aiming to produce an accurate facial 3D reconstruction method that combines the well-established strengths of traditional geometry-based 3D reconstruction with the versatility of deep-learning-based facial reconstruction. Another important aspect of the envisaged system will be its robustness to the challenging conditions that arise when images are acquired by laypersons at home: blurry images, poor illumination, an incorrectly positioned camera and partial occlusion of the face, which can be minimised by guiding the user but not fully eliminated. Deep learning methods in Computer Vision usually perform poorly in such conditions [8]. Embedding 3D geometry constraints will allow the system to detect parts of the image that are invalid, much as erroneous correspondences between images are filtered out in Structure-from-Motion pipelines. The project will aim to reach the sub-millimetre accuracy required for measuring anatomical structures for facial growth monitoring while remaining as robust as possible to the real-world conditions of home acquisition.
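The sketch below illustrates the kind of geometric filtering alluded to here (an assumed illustration, not the project's actual method): putative correspondences between two home captures are scored against a RANSAC-estimated fundamental matrix, and those that violate the epipolar constraint, for example because of blur or occlusion, are flagged as invalid.

```python
# Hedged sketch: score correspondences by symmetric epipolar error and flag
# the ones inconsistent with the estimated two-view geometry.
import numpy as np
import cv2

def epipolar_errors(F, pts1, pts2):
    """Symmetric epipolar distance in pixels for Nx2 arrays of corresponding points."""
    p1 = np.hstack([pts1, np.ones((len(pts1), 1))])    # homogeneous Nx3
    p2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    l2 = p1 @ F.T                                      # epipolar lines in image 2
    l1 = p2 @ F                                        # epipolar lines in image 1
    d2 = np.abs(np.sum(p2 * l2, axis=1)) / np.linalg.norm(l2[:, :2], axis=1)
    d1 = np.abs(np.sum(p1 * l1, axis=1)) / np.linalg.norm(l1[:, :2], axis=1)
    return 0.5 * (d1 + d2)

# Simulated matches: a sideways camera motion gives depth-dependent horizontal
# disparities; 20 matches are corrupted to mimic blur/occlusion failures.
rng = np.random.default_rng(0)
pts1 = rng.uniform([0.0, 0.0], [1080.0, 1920.0], size=(200, 2))
disparity = rng.uniform(20.0, 80.0, size=200)
pts2 = pts1 + np.stack([disparity, np.zeros(200)], axis=1)
pts2 += rng.normal(0.0, 0.5, size=pts2.shape)          # detection noise
pts2[:20, 1] += 40.0                                   # geometry-violating matches

F, _ = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
valid = epipolar_errors(F, pts1, pts2) < 2.0           # threshold in pixels
print(f"kept {valid.sum()} of {len(valid)} correspondences")
```

In a full system the same consistency checks could be applied densely, producing a validity mask that the learning-based reconstruction can take into account.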

For informal enquiries about the project, contact Dr Ludovic Magerand -

For general enquiries about the University of Dundee, contact

Our research community thrives on the diversity of students and staff, which helps to make the University of Dundee a UK university of choice for postgraduate research. We welcome applications from all talented individuals and are committed to widening access to those who have the ability and potential to benefit from higher education.

QUALIFICATIONS

Applicants must have obtained, or expect to obtain, a UK honours degree at 2.1 or above (or equivalent for non-UK qualifications), and/or a Master's degree in a relevant discipline. For international qualifications, please see equivalent entry requirements here: www.dundee.ac.uk/study/international/country/.

English language requirement: IELTS (Academic) overall score must be at least 6.5 (with not less than 5.5 in reading, listening and speaking, and 6.0 in writing). The University of Dundee accepts a variety of equivalent qualifications and alternative ways to demonstrate language proficiency; please see full details of the University’s English language requirements here: www.dundee.ac.uk/guides/english-language-requirements.

APPLICATION PROCESS

Step 1: Contact Dr Ludovic Magerand by email () to (1) send a copy of your CV and (2) discuss your potential application and any practicalities (e.g. a suitable start date).

Step 2: After discussion with Dr Magerand, formal applications can be made via our direct application system. When applying, please follow the instructions below:

Candidates must apply for the Doctor of Philosophy (PhD) degree in Computing (3 year) using our direct application system: apply for a PhD in Computing.

Please select the study mode (full-time/part-time) and start date agreed with the lead supervisor.

In the Research Proposal section, please:

- Enter the lead supervisor’s name in the ‘proposed supervisor’ box

- Enter the project title listed at the top of this page in the ‘proposed project title’ box

In the ‘personal statement’ section, please outline your suitability for the project selected.


Funding Notes

For an exceptional candidate, we can consider applying for competition funding or might be able to obtain direct funding.

References

[1] Xingkui W, et al. DeepSFM: Structure From Motion Via Deep Bundle Adjustment. Proc. of the Eur. Conf. on Computer Vision, 2020.
[2] Jianyuan W, et al. Deep Two-View Structure-from-Motion Revisited. Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, 2021.
[3] Agrawal S, et al. High Accuracy Face Geometry Capture using a Smartphone Video. Proc. of the IEEE/CVF Winter Conf. on Applications of Computer Vision, 2020.
[4] Booth J, et al. 3D Reconstruction of “In-the-Wild” Faces in Images and Videos. IEEE Trans. on Pattern Analysis and Machine Intelligence, 40(11), Nov. 2018.
[5] Yandong W, et al. Self-Supervised 3D Face Reconstruction via Conditional Estimation. Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2021.
[6] Abdallah D, et al. Towards High Fidelity Monocular Face Reconstruction with Rich Reflectance using Self-supervised Learning and Ray Tracing. Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2021.
[7] Tianye L, et al. Topologically Consistent Multi-View Face Inference Using Volumetric Sampling. Proc. of the IEEE/CVF Int. Conf. on Computer Vision, 2021.
[8] Drenkow N, et al. Robustness in Deep Learning for Computer Vision: Mind the Gap? arXiv preprint, 2021.
[9] Nightingale RC, et al. A Method for Economical Smartphone-Based Clinical 3D Facial Scanning. Journal of Prosthodontics, 2020;29(9):818-825.
[10] Mai H and Lee D. Accuracy of Mobile Device–Compatible 3D Scanners for Facial Digitization: Systematic Review and Meta-Analysis. Journal of Medical Internet Research, 2020.
[11] Gallardo Y, et al. Evaluation of the 3D error of 2 face-scanning systems: An in vitro analysis. Journal of Prosthetic Dentistry, 2021.
[12] Salazar-Gamarra R, et al. Monoscopic photogrammetry to obtain 3D models by a mobile device: a method for making facial prostheses. J Otolaryngol Head Neck Surg, 2016;45(1):33.
