Blade Runner—Autoencoded

Blade Runner—Autoencoded (2016) is a film produced by training an artificial neural network to ‘watch’ the film Blade Runner numerous times. Afterwards, the film was reconstructed through the memory of the neural network, giving an imperfect reconstruction of the original film.

You can read more about the work in my original blogpost or in my subsequent Leonardo paper that was presented at the SIGGRAPH ‘17 Art Papers track.

An edition of Blade Runner—Autoencoded was acquired by the City of Geneva's Contemporary Art Collection.

Still from Blade Runner—Autoencoded.


This series, part of the exhibition Dreamlands, highlights a range of cinematic approaches from optical abstraction to science fiction. These films challenge the notion of cinema as a form, as artists project into the future, document the past, disrupt the canon, and explore the limits of our senses.
The film Blade Runner (1982) was reconstructed frame by frame using a type of artificial neural network called an autoencoder. This is a side-by-side comparison of the first 15 minutes with the original film. The autoencoder was trained to reconstruct the film's individual frames using the learned similarity architecture proposed by Larsen et al.
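The idea of training a network to 'watch' the film and then replay it from memory can be sketched as a small convolutional autoencoder: an encoder compresses each frame to a low-dimensional latent vector, and a decoder reconstructs the frame from that vector. This is a minimal illustration only, assuming PyTorch; the layer sizes, latent dimension, and the pixel-wise MSE loss are stand-ins, since the actual work uses the VAE/GAN learned-similarity model of Larsen et al. rather than a plain autoencoder.

```python
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Toy autoencoder over 3x64x64 frames (hypothetical sizes)."""

    def __init__(self, latent_dim=200):
        super().__init__()
        # Encoder: compress a frame to a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),
        )
        # Decoder: reconstruct the frame from the latent vector.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 16 * 16),
            nn.Unflatten(1, (64, 16, 16)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# "Watching" the film: repeatedly training to reconstruct its frames.
model = FrameAutoencoder()
frames = torch.rand(4, 3, 64, 64)  # stand-in batch of video frames
optim = torch.optim.Adam(model.parameters(), lr=1e-3)

recon = model(frames)
# Larsen et al. replace this pixel loss with a loss measured in the
# feature space of a GAN discriminator ("learned similarity").
loss = nn.functional.mse_loss(recon, frames)
loss.backward()
optim.step()
# recon.shape == torch.Size([4, 3, 64, 64])
```

After training, feeding every frame of the original film through the encoder and decoder yields the imperfect, 'remembered' reconstruction that makes up the final piece.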