After weeks of stagnation at the end of session 2 of Parag Mital’s course on Kadenze, some progress, though only up to the middle of session 4, dealing with the nitty-gritty of Variational Autoencoders… Steep, but after setting myself the task of copying out every single line of code in these Jupyter Notebooks, the whole verbosity of TensorFlow becomes a little less alien.
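For my own notes, here is roughly the shape of the thing, compressed into current TensorFlow/Keras rather than the TF1 idioms of the course notebooks: an encoder that outputs the mean and log-variance of a latent Gaussian, the reparameterisation trick to sample from it, and a decoder trained on reconstruction error plus a KL penalty. Everything here (the layer sizes, the 2-dimensional latent space, MNIST-style 28×28 inputs) is my own toy assumption, not Mital’s code.

```python
import tensorflow as tf

latent_dim = 2  # assumption: tiny latent space, as in toy examples

# Encoder: image -> mean and log-variance of q(z|x)
encoder = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2 * latent_dim),  # packed as [mu, log_var]
])

# Decoder: latent code -> Bernoulli logits over pixels
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(latent_dim,)),
    tf.keras.layers.Dense(28 * 28),
    tf.keras.layers.Reshape((28, 28)),
])

optimizer = tf.keras.optimizers.Adam(1e-3)

@tf.function
def train_step(x):
    """One VAE step on a batch of images scaled to [0, 1], shape (batch, 28, 28)."""
    with tf.GradientTape() as tape:
        mu, log_var = tf.split(encoder(x), 2, axis=1)
        # Reparameterisation trick: z = mu + sigma * eps keeps sampling differentiable
        eps = tf.random.normal(tf.shape(mu))
        z = mu + tf.exp(0.5 * log_var) * eps
        logits = decoder(z)
        # Reconstruction term: how well the decoder redraws x
        rec = tf.reduce_sum(
            tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=logits),
            axis=[1, 2])
        # KL term: keep q(z|x) close to the unit Gaussian prior
        kl = -0.5 * tf.reduce_sum(
            1 + log_var - tf.square(mu) - tf.exp(log_var), axis=1)
        loss = tf.reduce_mean(rec + kl)
    variables = encoder.trainable_variables + decoder.trainable_variables
    optimizer.apply_gradients(zip(tape.gradient(loss, variables), variables))
    return loss
```

Feeding it batches of MNIST pixels scaled to [0, 1], e.g. `train_step(x_train[:64])`, is enough to watch the loss fall; the notebooks do considerably more, but this is the skeleton I keep losing sight of.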

I should mention that all this learning was made far more approachable by having first followed Rebecca Fiebrink’s courses at Goldsmiths and her Machine Learning for Musicians and Artists on Kadenze, which offered a first basis for the work at hand.

One of the main issues I encounter is that I can’t find a direct application of these techniques to my own projects, or a direct way of implementing them there. I am learning them, slowly, but in the abstract, as it were; these attempts still remain disconnected from actual artistic practice. This is reinforced by the heavy focus on visuals (and, to a lesser extent, music and performance) in most of what I see around in the ‘neural art’ scene.

Still, the use of Generative Adversarial Networks (GANs) for artistic practice is promising, and it would be good to be able to apply that to text; a toy sketch of the adversarial setup follows below.
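To fix the idea for myself: a GAN trains two networks against each other, a generator that maps noise to images and a discriminator that tries to tell those images from real ones. The sketch below is in current TensorFlow/Keras; every name and size in it (`noise_dim`, the layer widths, 28×28 inputs) is my own toy assumption, nothing from the course or from anyone else’s work.

```python
import tensorflow as tf

noise_dim = 64  # assumption: size of the generator's input noise vector

# Generator: noise vector -> fake 28x28 image with pixels in [0, 1]
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(noise_dim,)),
    tf.keras.layers.Dense(28 * 28, activation="sigmoid"),
    tf.keras.layers.Reshape((28, 28)),
])

# Discriminator: image -> single logit, "real vs. generated"
discriminator = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1),
])

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
g_opt = tf.keras.optimizers.Adam(1e-4)
d_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(real_images):
    """One adversarial step: update both networks on a batch of real images."""
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise)
        real_logits = discriminator(real_images)
        fake_logits = discriminator(fake_images)
        # Discriminator: call real images real, generated ones fake
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
        # Generator: fool the discriminator into calling fakes real
        g_loss = bce(tf.ones_like(fake_logits), fake_logits)
    d_opt.apply_gradients(zip(
        d_tape.gradient(d_loss, discriminator.trainable_variables),
        discriminator.trainable_variables))
    g_opt.apply_gradients(zip(
        g_tape.gradient(g_loss, generator.trainable_variables),
        generator.trainable_variables))
    return d_loss, g_loss
```

Applying the same trick to text is harder, since sampling discrete tokens breaks the differentiability that the generator update relies on, which is presumably part of why most of what circulates is images and sound. In any case, Mital is collaborating with Refik Anadol on GAN-generated music and animation at the moment, see this and that tweet for instance, as well as the following video: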