
Mixtures of Generative Deep Learning Networks

   Department of Computer Science

This project is no longer listed and may not be available.

Supervisor: Dr A Bors · Applications accepted all year round · Self-Funded PhD Students Only

About the Project

The main generative deep learning models are Variational Autoencoders (VAEs) [1] and Generative Adversarial Networks (GANs) [2]. While the former encodes a latent space of representations, the latter plays a min-max game aiming to generate realistic data (images). Mode collapse [3] is a well-known problem that limits the diversity of the data generated by GANs. New processing capabilities become available when combining generative models into mixtures [4]. Mixtures of generative networks allow exploring the joint representations of the latent spaces of sets of VAEs, or addressing the mode collapse problem in GANs. Mixtures of GANs were studied in [5] and mixtures of VAEs in [6], where new latent spaces of data representations are explored through interpolations between the data associated with the latent spaces of different network components. Mixtures of generative deep learning networks have also been developed for the lifelong learning of sequences of databases [7]. Other approaches use hierarchical structures, such as non-parametric approaches [8], GAN-trees [9], or clustering in the data generative (latent) space [10]. Criteria controlling mixture expansion were analysed in [11].
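To make the idea concrete, the sampling side of a mixture of generators can be sketched in a few lines. This is a toy illustration only: in practice each component would be a trained neural network (a GAN generator or VAE decoder, e.g. in PyTorch), whereas here simple affine maps stand in for the generators so the example is self-contained.

```python
import random

def make_generator(scale, shift):
    """Return a toy 'generator' mapping a latent code z to data space."""
    return lambda z: scale * z + shift

# Three hypothetical components, each covering a different data mode.
generators = [make_generator(1.0, -5.0),
              make_generator(0.5, 0.0),
              make_generator(2.0, 5.0)]
weights = [0.5, 0.3, 0.2]  # mixture weights, summing to 1

def sample_mixture(rng):
    # 1) choose a component according to the mixture weights
    k = rng.choices(range(len(generators)), weights=weights, k=1)[0]
    # 2) draw a latent code from the prior p(z) = N(0, 1)
    z = rng.gauss(0.0, 1.0)
    # 3) push the latent code through the chosen generator
    return k, generators[k](z)

rng = random.Random(0)
samples = [sample_mixture(rng) for _ in range(1000)]
frac_k0 = sum(1 for k, _ in samples if k == 0) / len(samples)
print(round(frac_k0, 2))  # empirical usage of component 0, close to its weight 0.5
```

This ancestral-sampling scheme (first pick a component, then sample from it) is the standard way to draw from any mixture model; the research questions above concern how the components and weights are learned, not this sampling step.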

This project will develop a mixture of generative models that enables new data representations while minimizing the number of components and the computation used.

Objectives: defining mixtures of generators; designing loss functions for mixtures of generative deep learning models; defining criteria for adding new components to the mixture; minimizing the total number of parameters used; improving the quality of generated images; optimizing computational resource allocation.
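One of the objectives above, deciding when to add a new component, can be illustrated with a minimal threshold rule. This sketch is hypothetical and not the project's actual criterion: "fit" is measured here by the squared error against each component's mean, where a real system would use a per-component likelihood or reconstruction loss, as in the expansion criteria analysed in [11].

```python
THRESHOLD = 4.0  # hypothetical tolerance on the best component's error

def best_fit_error(components, batch):
    """Smallest mean squared error of the batch under any component mean."""
    return min(sum((x - mu) ** 2 for x in batch) / len(batch)
               for mu in components)

def maybe_expand(components, batch):
    """Add a component (centred on the batch mean) if none fits well."""
    if best_fit_error(components, batch) > THRESHOLD:
        components.append(sum(batch) / len(batch))
    return components

components = [0.0]  # start with a single component centred at 0
components = maybe_expand(components, [0.1, -0.2, 0.3])   # well modelled: no growth
components = maybe_expand(components, [9.8, 10.1, 10.3])  # far from all components: expand
print(len(components), components)
```

The trade-off this rule exposes is exactly the one named in the objectives: a loose threshold keeps the mixture small (fewer parameters, less computation) at the cost of modelling quality, while a tight threshold grows the mixture for every poorly-fitted batch.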

Research areas: Deep Learning; Computer Vision and Image Processing; Neural networks.

Applications: Recognition and Classification; Understanding data and its latent representations.

The candidate should be familiar with, or willing to learn, deep learning tools such as PyTorch or TensorFlow.


[1] D. P. Kingma and M. Welling, Auto-encoding variational Bayes, 2013.
[2] I. Goodfellow, et al., Generative adversarial nets, Proc. NIPS, 2014, pp. 2672-2680.
[3] A. Srivastava, et al., VEEGAN: Reducing mode collapse in GANs using implicit variational learning, Proc. NIPS, 2017, pp. 3308-3318.
[4] H. Eghbal-zadeh, W. Zellinger, G. Widmer, Mixture Density Generative Networks, Proc. CVPR, 2019, pp. 5820-5829.
[5] T. Karras, et al., Progressive growing of GANs for improved quality, stability, and variation, Proc. ICLR, 2018.
[6] Y. Fei, A. G. Bors, Deep Mixture Generative Autoencoders, IEEE Trans. on Neural Networks and Learning Systems, 2021, DOI: 10.1109/TNNLS.2021.3071401.
[7] Y. Fei, A. G. Bors, Lifelong Mixture of Variational Autoencoders, IEEE Trans. on Neural Networks and Learning Systems, 2021, DOI: 10.1109/TNNLS.2021.3096457.
[8] L. Dinh, et al., A RAD approach to deep mixture models, 2019.
[9] J. N. Kundu, et al., GAN-Tree: An Incrementally Learned Hierarchical Generative Framework for Multi-Modal Data Distributions, Proc. ICCV, 2019, pp. 8191-8200.
[10] S. Liu, et al., Diverse Image Generation via Self-Conditioned GANs, Proc. CVPR, 2020, pp. 14286-14295.
[11] Y. Fei, A. G. Bors, Lifelong Infinite Mixture Model Based on Knowledge-Driven Dirichlet Process, Proc. ICCV, 2021.

How good is research at University of York in Computer Science and Informatics?

Research output data provided by the Research Excellence Framework (REF)
