
  Intelligent Character Creation for Video Games


   Centre for Accountable, Responsible and Transparent AI

This project is no longer listed on FindAPhD.com and may not be available.

Supervisor: Prof Darren Cosker
Competition Funded PhD Project (European/UK Students Only)
No more applications are being accepted.

About the Project

Current video games allow fine-grained shape and colour selection across multiple features (hair, mouth, eyes, tattoos, etc.); examples include Star Citizen, Fallout 4, Black Desert, Destiny and Xenoblade Chronicles X. These interfaces can be time-consuming, and they make it difficult to visualise and ‘browse’ multiple options intelligently, or to be shown options the player may not have thought of, unless through a random generator that ‘seeds’ the appearance of the character and lets the user edit from that point. In addition, building assets for video game characters can take professional artists days of highly skilled labour.

In the research community, Generative Adversarial Networks (GANs) have recently been shown to produce highly impressive results in image synthesis and exploration (e.g. [CycleGAN, CoGAN, PoseGAN]). This has also led to attempts to synthesise stylised Anime characters [AnimeGAN]. However, while there is promising work on the semantic control of these latent spaces (e.g. generating faces with controls for smile intensity or hair colour) [CoGAN], these techniques, as well as other GAN-based image synthesis methods, have yet to be applied to the domain of 3D character models or character creation.
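The kind of semantic control mentioned above is often realised by moving a character's latent code along a learned attribute direction. The sketch below is purely illustrative, assuming a hypothetical pretrained generator (stood in for here by a toy linear map so the example runs self-contained) and an attribute direction assumed to have been found in advance, e.g. by comparing latents of samples with and without the attribute:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pretrained GAN generator: a fixed linear map
# from a 4-dimensional latent code to an 8-dimensional feature vector.
W = rng.normal(size=(8, 4))

def generate(z):
    """Toy generator: maps a latent code z to a feature vector."""
    return W @ z

# Hypothetical attribute direction (e.g. "smile intensity") in latent
# space -- in practice this would be learned from labelled samples.
smile_direction = np.array([1.0, 0.0, 0.0, 0.0])

z = rng.normal(size=4)                         # latent code of one character
base = generate(z)                             # original appearance
edited = generate(z + 2.0 * smile_direction)   # push along the attribute

# Because this toy generator is linear, the semantic edit shifts the
# output by exactly 2 * (W @ smile_direction).
assert np.allclose(edited - base, 2.0 * (W @ smile_direction))
```

A real generator is nonlinear, so the edit's effect is not exactly additive, but the same "shift the latent, re-generate" pattern underlies the semantic controls described in the references below.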

In this project, we will investigate the use of AI to enable rapid character and asset creation, whether for personalised, highly stylised avatars or for high-quality rapid character creation in, for example, video games. These systems will have semantically meaningful controls for shape, size, style and appearance, and will allow users to guide the design of the character with additional data (e.g. uploaded photographs, concept art or even voice audio) to personalise or rapidly accelerate character creation.

This project is associated with the UKRI CDT in Accountable, Responsible and Transparent AI (ART-AI), which is looking for its first cohort of at least 10 students to start in September 2019. Students will be fully funded for 4 years (stipend, UK/EU tuition fees and research support budget). Further details can be found at: www.bath.ac.uk/research-centres/ukri-centre-for-doctoral-training-in-accountable-responsible-and-transparent-ai/.

Desirable qualities in candidates include intellectual curiosity, a strong background in maths, and programming experience.

Applicants should hold, or expect to receive, a First Class or good Upper Second Class Honours degree. A master’s level qualification would also be advantageous.

Informal enquiries about the project should be directed to Prof Darren Cosker: [Email Address Removed].

Enquiries about the application process should be sent to [Email Address Removed].

Formal applications should be made via the University of Bath’s online application form for a PhD in Computer Science: https://samis.bath.ac.uk/urd/sits.urd/run/siw_ipp_lgn.login?process=siw_ipp_app&code1=RDUCM-FP01&code2=0013

Start date: 23 September 2019.


Funding Notes

ART-AI CDT studentships are available on a competition basis for UK and EU students for up to 4 years. Funding will cover UK/EU tuition fees as well as providing maintenance at the UKRI doctoral stipend rate (£15,009 per annum for 2019/20) and a training support fee of £1,000 per annum.

We also welcome year-round applications from candidates who are self-funded or who can source their own funding.

References

[AnimeGAN] Towards Automatic Anime Character Creation with Generative Adversarial Network. Jin et al. NIPS 2017 Workshop on Machine Learning for Creativity and Design. https://arxiv.org/pdf/1708.05509.pdf
[PoseGAN] Pose Guided Person Image Generation. Ma et al. NIPS 2017. https://arxiv.org/pdf/1705.09368.pdf
[CycleGAN] Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. Zhu et al. ICCV 2017. https://arxiv.org/pdf/1703.10593.pdf
[CoGAN] Coupled Generative Adversarial Networks. Liu and Tuzel. NIPS 2016. https://arxiv.org/pdf/1606.07536.pdf
