Recursus is a bit of a monster. Two of its arms, WordSquares and Subwords, are quite similar in scope and nature. Yet including only those would not have been satisfactory. Rather early on in my Oulipo-inspired research into words and letter constraints, I felt I was hitting a wall, albeit perhaps an internal one, and reached the conclusion that my approach using these constraints is not free enough. It does foster practice, which had been somewhat flailing beforehand, but it corners me into finding word combinations, extremely constrained lexical clusters, the evocative powers of which I am then set to maximize. This leads to asyntactic poetics, where the poetic, literary space exists in the combined meanings of the words, with almost no regard to their arrangement. This works to some extent, and I manage to dream these spaces, if the words are well-chosen enough. However, the drive, the urge for syntax, for more than a crafted complex of words, remains, and I often find myself trying to find primitive sentences, or phrases, within the squares or subword decompositions I am examining. AIT is the missing piece, the fledgling link to a restored sense of linearity and development. On its own, it would have been computationally much weaker. Despite my efforts in gathering knowledge of machine learning, and deep learning in particular, reaching a sense of mastery and ease when developing a creative framework remains elusive. From my experience, this is the typical symptom of a phase of assimilation: seemingly no light at the end of the tunnel, despite active boring, until unexpected advances and familiarity kick in, and code, like writing perhaps, comes to the fingers. The creative interaction with the product of neural text generation is also closer to literary practice as commonly understood, an alleyway I am set to explore in the future.
Recursus, thus, as its name suggests, is both a return (to literary practice: writing, editing, imagination with constraints) and a step forward: the transition from a more straightforward, perhaps classical approach to computational literature (where the element of computation might have an overbearing importance on the finished pieces) to a less defined, but hopefully more fertile, ground of symbiosis between the writing and the machine.

I will not go into a detailed discussion of Subwords, as the process is the same as with WordSquares, modulo a few details, and without the difficulties I encountered with the latter. To my surprise, evocative, interesting subword decompositions were easy, rather than hard, to find, and the batches shown are only a few of the possibilities present in the databases generated so far. Unlike with WordSquares, I had to stop searching and arbitrarily decide, given the time at hand, to leave other possibilities to future research. Instead, I will focus on the two antagonistic practices developed here:

  • WordSquares, where constraint levels are extremely high (due to vast numbers of results with looser ones)
  • AIT, which offers new forms of constraints, a new balance in my interaction with them.
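Though I will not detail Subwords further, the decomposition search it relies on can be sketched as a short recursive split. This is a minimal illustration under my own assumptions — the function name, the `min_len` parameter, and the toy lexicon are hypothetical, not the actual program's:

```python
def decompose(word, lexicon, min_len=2):
    """Yield every way of splitting `word` into a sequence of
    lexicon words, each at least `min_len` letters long."""
    if not word:
        yield ()  # the empty remainder decomposes trivially
        return
    for i in range(min_len, len(word) + 1):
        head = word[:i]
        if head in lexicon:
            # recursively decompose what remains after this head
            for rest in decompose(word[i:], lexicon, min_len):
                yield (head,) + rest
```

With a toy lexicon containing 'car', 'pet' and 'carpet', the word 'carpet' yields both ('car', 'pet') and the trivial ('carpet',); the real databases come from running such a search over a full word list.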

WordSquares

WordSquares, like Subwords and Wordlaces (a project not included here), is produced following a twofold movement: first, the establishment of a constraint, implemented in a program which produces a space of results, a database; second, an exploration and selection of singular, salient elements within said space. The idea is 1) that it is very difficult to write these forms manually (find squares, subwords, etc.), but also 2) that it is very difficult to know in advance which combinations will be intriguing, beautiful, strong, etc.; which will be endowed with literary worth, and deserve to be read or studied. These two factors led to the idea of the complete space, the database, that can be explored, where jewels might lie among the rubble. And rubble there is. The core impediment, and quite the discovery, is that once the hurdle of database building has been overcome, the mining itself can be just as problematic. My expectations were, of course, that there would be dross, useless or nonsensical bits and pieces littering the space, and that the mining work would be about finding special elements hidden in there. I hadn’t realised quite how vast the dross could be. First, I did not realise how gigantic the number of possible squares could become. Using a list of 4-letter words containing 4k+ items, the unconstrained algorithm, which does not care about words present in the diagonals, started churning out millions of squares. I stopped the computation at around the letter ‘d’ (it goes through the first words in the first row one after the other, attempting to build squares ‘under’ them), with nearly 30 million squares in my file (a text file of 1.4 GB). Seeing how unwieldy it could be to manage such a large dataset, I set out to introduce the diagonal constraint. It did reduce the amount to a bit more than 200k squares for four letters, and 750k for five1. Then came the mining.
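The generating program described above can be sketched as a prefix-pruned backtracking search: fill the grid row by row, abandoning any partial square whose columns can no longer begin a word, and, when the diagonal constraint is on, keep only grids whose two diagonals are also words. This is a minimal sketch under my own assumptions; the names and the exact diagonal handling differ from my actual program:

```python
def find_squares(words, n, check_diagonals=True):
    """Yield n-by-n word squares: every row and column is a word,
    and optionally both diagonals too."""
    words = [w for w in words if len(w) == n]
    wordset = set(words)
    # all prefixes (including full words) of the n-letter vocabulary
    prefixes = {w[:i] for w in words for i in range(n + 1)}

    def build(rows):
        if len(rows) == n:
            # pruning below guarantees every column is a full word here
            if check_diagonals:
                diag = ''.join(rows[i][i] for i in range(n))
                anti = ''.join(rows[i][n - 1 - i] for i in range(n))
                if diag not in wordset or anti not in wordset:
                    return
            yield tuple(rows)
            return
        for w in words:
            cand = rows + [w]
            # prune: each column built so far must still be a word prefix
            if all(''.join(r[c] for r in cand) in prefixes
                   for c in range(n)):
                yield from build(cand)

    yield from build([])
```

On a toy two-word list it finds, with diagonals disabled, the square no/on (columns 'no' and 'on'); on a real 4k+ word list the same routine is what produces squares by the hundreds of thousands.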
To my surprise, it proved exceedingly difficult to find squares in those two lists that met my standards of meaningfulness and evocative power. Most of them felt imperfect, resisting attempts to treat them as ‘crafted’ objects, as I would have wished: there was always this or that word in the set that really did not match the rest, or did not bring me any satisfaction. As I spent more hours mining, I often felt despondent, thinking I was cornered between lowering my aesthetic standards (presenting ‘unworthy’, or even worthless, pieces) and giving up on finding anything at all. I must say here that, despite earlier attempts to apply visualisations and machine learning techniques to the whole process, I am yet to develop an effective work pipeline that can integrate systematic research and visualisation tools, account for my literary sensitivity and personal preferences and, most of all, include the crucial factor of discovery and randomness required, as I remain hopelessly unable to articulate what a good piece is, reduced as I am to pointing to the ones of value, and discarding the unwanted others.

This ended up creating a specific kind of tension, one that I am sure people like Georges Perec must have felt when working with constraints: the system enables you to explore new spaces, to go beyond what your ‘natural’ imagination would have come up with, yet at the same time forbids, in a way that is far more rigid than moral or legal interdictions, certain alleyways and results, leaving you in an unpleasant, if not entirely sterile, position of ‘take it or leave it’. The squares exhibited here are the result of that: more often than not going too far into nonsense, or at least into an ‘understructured’ area perhaps beyond meaning, as well as using rare and dialectal forms, whilst also having been selected among hundreds of thousands of others, sometimes painstakingly, despite these shortcomings, precisely because they do exhibit word combinations that kindle, I hope, a spark of the literary flame. It is up to readers to decide for themselves whether the line trodden has been adequate, satisfactory, or a failure.


AIT

AIT, the module using neural text generation, follows a not too dissimilar logic of coming and going, exile and return. This time we start with a ‘classical’, pen-and-paper writing practice, the product of which is fed to a network. The network, once trained, can output virtually any amount of new text that should be, in principle, of the same nature as the original, without being a copy. This stage corresponds, in my view, and perhaps unlike other attempts I have seen to work with machine learning to produce text, to stage two, the database, in my formal projects: it is only a stage, and must be transcended, at least until the advent of the singularity. Indeed, what we obtain using machine learning is, to my knowledge, greatly insufficient for the aims and ideals of literature (in fact the texts are often plain boring, especially as so few writers engage with these technologies). While working more closely with machine learning for text generation (and its ‘precursors’, Markov chains, or context-free grammars) is among my projects, in the present state it seems to me far more fruitful to treat the results thus coined as material, rather than finished works. Material, that is: something on which to build, to write, something that spurs the mind and the heart to explore new paths, but that remains more dead than not without an active intervention on the part of a creative subject. Once this view was adopted, the output of the networks stopped being frustrating and started being a truly promising pathway for a symbiotic relationship with the machine, ‘cywriting’, or even ‘cyting’, as a nod to cyborgs, where the mechanical and the organic, the subjective and the generated, intertwine productively.

It was a remarkable moment to discover that this strange stream of words and letters (sometimes even including sensical wordplay!) could be read as improvable, editable text, where meaning, images, coherence could be sought, under a renewed ‘suspension of disbelief’ contract. In this new framework, I choose to assume there is sense somewhere to be found, as if the text had been written not by a copying machine but by some other, in this case some uncanny older self, with an intent, despite the mist and chaos of its state of mind. From there, a renewed literary practice can emerge, together with a renewed sense of what ‘constraint’ can mean: given an output, I must make it work, like any other text I or any writer might compose, like the WordSquares or Subwords earlier, despite the woes and hazards of quantity. If I can’t, I must discard the intractable sections and move on. Just as previously the database was used as an external, realised imagination, the text here, despite its generated nature, is a real, given (imposed: the new constraint) draft of the future work, and it is my task to make it ‘re-enter’ the literary space (assuming the original texts were in fact part of literature, even in a loose sense)2. Just as with WordSquares above, and perhaps more acutely so, the challenge of ‘making it’ (literary, salient, good, etc.) remains complete. But unlike the work done with databases, there is a sense of an opening here: possibilities, and a more fluid, free interplay of constraints and creative impulses. For years I have been looking for ways to escape a tendency toward a kind of writing which is both automatic (frantic) and intimate (shackled to my personal, private circumstances). The interplay with such a mechanical process might offer hints toward a way out, as it is possible to work with an external, artificial ‘imagination’ that suddenly becomes far more flexible and fertile than what I must handle in my formal projects.
The network, in the midst of repetitive or banal passages, ‘comes up’ with words, phrases, even what seem to be ‘ideas’, which I project onto the stream of letters offered to my perusal, and which my impoverished, battered mind would never have produced on its own, yet which are close enough to my usual experiments for me to be ‘fooled’ into recognising them as an extension of myself. Even if that is far removed from a ‘full AI’ text, it is enough to get me going, and to induce me to take the text seriously and work on it (something I have been unable to do for all this time with texts ‘of my own’).

I also think of it as some sort of oracle: the meaning of its words is abstruse; they require effort and, more than that, an act of projection on my part. The ambiguous status of this ‘oracular’ monologue is that I am the oracle, and doubly so: by being the source of the texts fed to the network, and by having as my main constraint to produce meaning out of it whatever the cost (by cutting, adding, shuffling, etc.). The external mysterious voice acts as a tool to force myself to find, or to secrete (the two seem quite interchangeable in this context), some form of message, idea, or imagery that, while remaining irretrievably other, I can claim as my own.


  1. A few VI-squares can be found on Recursus. Those were produced using the 20k most frequent words in Google, and do not contain diagonals. My program only found a few dozen, and it has been a recurrent pattern, given any of the lists I used (and also in earlier experiments like Wordlaces), that combinatorial results tend to drop massively above 5 letters, often to nothing. I have been unable to find any square above 6 letters so far (and my only hope of finding some would be to improve the speed of my algorithm, perhaps by integrating C code with Cython, or to gain access to a multicore supercomputer).

  2. As any data scientist knows, when it comes to machine learning, data matters almost as much as the actual architecture used. The study of ‘data curation’ is still very new to me, and will be keeping me busy, I’m sure, in future years. For the pieces presented here, I used a yet-to-be-unearthed bulk of personal texts written since 2012, entitled ‘it’, and divided into many parts (they can be found in these folders). One of my coming projects is to build a website for them, to ensure easier access to them, and to work toward publication.