Gnarly Posthuman Conversations:

John Ashbery, W. H. Auden, Wallace Stevens, and GPT-2


Technical Background:

GPT-2 (Generative Pre-trained Transformer 2) is an attention-based neural network. It operates by predicting the next word in a sequence based on all the words that came before it in that sequence. While the base model was trained on 8 million web pages, its attention-based architecture makes it especially well suited to transfer learning, which allows the large model to be fine-tuned on a much smaller corpus of text. The resulting model leverages the lower-level features learned by the base model (likely word order, apparent grammar, etc.) while taking on the style and content of the fine-tuning corpus. Other neural networks, such as traditional recurrent neural networks, can also be used to generate text, but they can additionally be used to classify it, i.e., to identify whether a text was written by a certain author once trained on a subset of that author’s corpus. These experiments use both of these types of systems to create “hyper-carving” text generation pipelines.
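
The project does not specify its tooling, but as a minimal sketch, the fine-tuning step could be done with the Hugging Face transformers library; the corpus file name, output path, and hyperparameters below are illustrative assumptions, not the project's actual settings:

    # Minimal GPT-2 fine-tuning sketch (hypothetical paths and hyperparameters).
    from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                              DataCollatorForLanguageModeling, Trainer,
                              TrainingArguments)

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")  # base model's learned features

    # Chunk the fine-tuning corpus into fixed-length training examples.
    dataset = TextDataset(tokenizer=tokenizer, file_path="ashbery.txt",
                          block_size=128)
    collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="gpt2-ashbery", num_train_epochs=3,
                               per_device_train_batch_size=4),
        data_collator=collator,
        train_dataset=dataset,
    )
    trainer.train()
    trainer.save_model("gpt2-ashbery")  # new model in the style of the corpus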



Conceptual Background:

In a 1980 interview with David Remnick, John Ashbery describes the formative impact that the poetry of W. H. Auden had on his writing: “I am usually linked to Wallace Stevens, but it seems to me Auden played a greater role. He was the first modern poet I was able to read with pleasure…” In another interview, Ashbery identifies Auden as “one of the writers who most formed my language as a poet.” For his part, Auden had a mutual yet mysterious appreciation for the younger poet’s work; he awarded Ashbery the Yale Series of Younger Poets prize for his collection Some Trees, with the caveat: “...that he had not understood a word of it.”



Methodology:

Playfully building off this narrative, I devised a text generation pipeline that involves fine-tuning GPT-2 models on the poetry of John Ashbery and W. H. Auden. The two resulting models are then put into conversation with each other, with the output of the Ashbery model serving as an input prompt to the Auden model and vice versa, in a feedback loop.
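
A sketch of this feedback loop, assuming the two fine-tuned models from the previous step have been saved locally (the model paths, seed line, and sampling parameters are illustrative):

    # Feedback-loop sketch: each model's output becomes the other's prompt.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    ashbery = GPT2LMHeadModel.from_pretrained("gpt2-ashbery")  # hypothetical paths
    auden = GPT2LMHeadModel.from_pretrained("gpt2-auden")

    def respond(model, prompt):
        inputs = tokenizer(prompt, return_tensors="pt")
        output = model.generate(**inputs, max_new_tokens=100, do_sample=True,
                                top_k=40, temperature=0.9,
                                pad_token_id=tokenizer.eos_token_id)
        return tokenizer.decode(output[0], skip_special_tokens=True)

    prompt = "Some trees"  # hypothetical seed line
    conversation = []
    for turn in range(10):  # the full run produced 2500 prompts and responses
        model = ashbery if turn % 2 == 0 else auden
        prompt = respond(model, prompt)  # output feeds the other model next turn
        conversation.append(prompt)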

The generated text is then classified by three separate recurrent neural networks: one trained on the poetry of Ashbery, one on the poetry of Auden, and one on the poetry of Wallace Stevens. This classification stage of the pipeline filters the large volume of generated text into smaller subsets that have a higher chance of displaying some level of novelty.
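
The classifiers' architecture is not described here; one plausible sketch of a single author classifier, assuming a binary LSTM built with Keras (the training data, layer sizes, and threshold below are placeholders), is:

    # Sketch of one of the three author classifiers (hypothetical design).
    import tensorflow as tf
    from tensorflow.keras import layers

    VOCAB_SIZE, MAX_LEN = 20000, 200
    vectorize = layers.TextVectorization(max_tokens=VOCAB_SIZE,
                                         output_sequence_length=MAX_LEN)

    model = tf.keras.Sequential([
        vectorize,
        layers.Embedding(VOCAB_SIZE, 64),
        layers.LSTM(64),
        layers.Dense(1, activation="sigmoid"),  # P(text is by this author)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])

    # Placeholder training data: the author's poems (1) vs. other poets' (0).
    texts = tf.constant(["a poem by this author ...", "a poem by another ..."])
    labels = tf.constant([1.0, 0.0])
    vectorize.adapt(texts)
    model.fit(texts, labels, epochs=5)

    # Keep only generated poems the classifier scores above a threshold.
    generated = tf.constant(["a generated poem ..."])
    scores = model.predict(generated)[:, 0]
    kept = [p.numpy().decode() for p, s in zip(generated, scores) if s > 0.9]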



Outputs:

The outputs of these experiments are expressed in two different ways: "conversations" and a "transformative reading interface."

In the "conversations" mode you can view the iterative process of the pipeline in its entirety (2500 prompts and responses in the order they occurred) or filtered sets of conversations, constructed by compiling lists of generated poems that were classified by the backend RNNs as Ashbery poems (green), Auden poems (blue), or Stevens poems (red).

The "transformative reading interface" mode offers the opportunity to compare the GPT-2 generated text with the original corpora. Through a dense series of links between the generated texts and the original corpus, blends of the two authors' works can be playfully explored while simultaneously examining the materiality and behavior of the language model.
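
The linking method is not specified here; as a rough illustration of how such links could be computed, one option is to index shared word n-grams between the generated texts and the source corpora (a hypothetical approach, not necessarily the project's):

    # Hypothetical linking sketch: connect a generated line to corpus lines
    # that share a word trigram with it.
    from collections import defaultdict

    def ngrams(text, n=3):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def build_index(corpus_lines, n=3):
        index = defaultdict(set)
        for line_no, line in enumerate(corpus_lines):
            for gram in ngrams(line, n):
                index[gram].add(line_no)
        return index

    def link(generated_line, index):
        matches = set()
        for gram in ngrams(generated_line):
            matches |= index[gram]  # corpus lines sharing an n-gram
        return sorted(matches)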



For more details, see my working paper, Transformative Reading and Writing Synthetic Archives with Language Models.