An idea is taking shape: transposing DeepDream to text (see this page for a breakdown of what happens inside deep neural nets, the foundation for DeepDream images).

It could be called ‘DeepSpeak’ (Orwellian wink).

What we need is:

  • a neural network (probably an LSTM, though features of ConvNets might be useful here) that can learn features from text input (and hopefully expand on this bit of research);
  • the ability to see what is happening at the level of individual neurons, and to tweak certain neurons so that they start activating more often or differently;
  • a way of outputting text from the tweaked network, so as to produce a ‘hallucinating’ result in the same way DeepDream does with images: the network would start twisting and inflating specific parts of a text in particular ways, e.g. transforming the vocabulary, adding sentences or phrases, or perhaps creating words on the fly (Finnegans Wake comes to mind). A minimal sketch of this tweak-and-amplify loop follows below.
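To make the second and third points concrete, here is a minimal sketch (PyTorch, an untrained toy model; names like `target_unit` and the decoding step are my own assumptions, not an established recipe) of what DeepDream-style activation maximisation might look like on text: instead of nudging pixels, we nudge the continuous character embeddings so that one chosen LSTM unit fires harder, then map the tweaked embeddings back to the nearest characters.

```python
import torch
import torch.nn as nn

alphabet = "abcdefghijklmnopqrstuvwxyz .,'"
char2idx = {c: i for i, c in enumerate(alphabet)}

embed = nn.Embedding(len(alphabet), 16)    # untrained, purely illustrative
lstm = nn.LSTM(16, 64, batch_first=True)   # the "feature-learning" network

text = "riverrun past eve and adam's"
ids = torch.tensor([[char2idx[c] for c in text]])

# Start from the real embeddings and make them a free variable to optimise.
x = embed(ids).detach().clone().requires_grad_(True)
target_unit = 7                            # arbitrary hidden unit to amplify
optimizer = torch.optim.Adam([x], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    out, _ = lstm(x)                       # shape (1, seq_len, 64)
    loss = -out[0, :, target_unit].mean()  # gradient ascent: make it fire more
    loss.backward()
    optimizer.step()

# Project the "hallucinated" embeddings back onto the nearest known characters.
dist = torch.cdist(x.detach()[0], embed.weight.detach())
decoded = "".join(alphabet[int(i)] for i in dist.argmin(dim=1))
print(decoded)
```

In a real experiment the LSTM would of course be trained on a corpus first, and the last step is the genuinely hard part: text is discrete, so the hallucination has to be projected back into symbols somehow, which is where the interesting distortions (and the invented words) would come from.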

Treat text like an image (which we would read as computers read images: element-wise, linearly): each symbol as a pixel, the whole text as one object of a certain size (total number of characters). It is perfectly possible to translate symbols into numbers (which already happens under the hood). The entire text (poem, novel, play, etc.) would be seen in one snapshot by the network, that is, in the same way as a network ‘sees’ an image, and learns from it. It might be an interesting way of learning about larger ‘areas’ of text (e.g. a self-contained scene, or the departure/arrival of characters or topics, the detection of which could work in the same way as the detection of edges and shapes in ConvNets for computer vision). There are direct links to LDA here, to detect the overall topic, atmosphere, characters or intrigue in a passage.
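A toy sketch of this ‘text as one snapshot’ idea (plain NumPy; the 8×8 canvas, the ASCII encoding and the 2×2 pooling are arbitrary illustration choices, not a proposal): every character becomes a number, the whole passage becomes one fixed-size grid, and a small window can slide over that grid in the same mechanical way ConvNet filters slide over pixels.

```python
import numpy as np

text = "stately, plump buck mulligan came from the stairhead"
side = 8                                     # an 8 x 8 "canvas" of characters
canvas = np.zeros((side, side), dtype=np.float32)

codes = np.frombuffer(text.encode("ascii"), dtype=np.uint8)  # symbol -> number
codes = codes[: side * side]                 # truncate (zeros act as padding)
canvas.flat[: codes.size] = codes / 255.0    # normalise like pixel intensities
print(canvas.shape)                          # (8, 8): the passage in one snapshot

# A 2 x 2 window averaged over the grid -- the same mechanics a pooling layer
# applies to pixels -- yielding coarse "regions" of the text.
patch_means = canvas.reshape(side // 2, 2, side // 2, 2).mean(axis=(1, 3))
print(patch_means.shape)                     # (4, 4)
```

The pooled 4×4 grid is a crude analogue of those larger ‘areas’ of text: each cell summarises a contiguous block of characters, much as early ConvNet layers summarise patches of an image before later layers detect edges, shapes, and eventually whole objects.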

My knowledge of networks is still incipient. I would need to learn more, a lot more, about ConvNets, LSTMs, and related matters.

Two interesting articles that should lead to further research:

As well as a talk summarizing the contents of the already-mentioned The Unreasonable Effectiveness of Recurrent Neural Networks.