PhD Thesis

Expanding the Generative Space: Data-Free Techniques for Active Divergence with Generative Neural Networks

2024

Goldsmiths, University of London

Available to download at: https://github.com/terrybroad/thesis

Abstract:

Generative neural networks offer powerful tools for data generation in many domains, given their ability to model data distributions and generate high-fidelity results. A major shortcoming, however, is that they are limited to fitting the target data distribution and cannot explicitly diverge from the training data in creative ways. This thesis presents a body of work investigating ways of training, fine-tuning, and configuring generative neural networks at inference time in order to achieve data-divergent generation. This goal of configuring generative neural networks to diverge from their original training data, or from any existing data distribution, is referred to as active divergence. All of the approaches presented in this thesis are data-free in their implementation, which distinguishes them from the orthodoxy of imitation-based learning that is widespread in machine learning research. The research presented in this thesis makes three categories of contribution to achieving active divergence: training without data, divergent fine-tuning, and network bending. In addition, a formal survey and taxonomy of active divergence methods is presented as a further contribution. The overriding goal of the research in this thesis is to expand the generative space of generative neural networks. All three methods achieve this, and they point to a new way of working with generative neural networks that does not rely on the imitation of, and derivation from, data for its value and creative possibilities.
