MFT: OpenAI Talk to Transformer
by Volodymyr Bilyk
You know, it seems like a floodgate of text-generating projects has opened recently. A new one arrives almost every week, and each time things get a little more interesting.
Now it is the turn of an OpenAI-based endeavour.
The tool is called “Talk to Transformer”. It is a natural language processing neural network designed to autocomplete text from an input query. Its modus operandi is simple – analyze the input text and predict the most fitting continuation.
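GPT-2 itself is a large Transformer trained on web text, but the basic idea – look at the input and pick the statistically most fitting continuation – can be sketched with a toy bigram model. Everything below, from the corpus to the function name, is a made-up illustration, not GPT-2’s actual machinery:

```python
from collections import Counter, defaultdict

# Toy stand-in for a training corpus (assumption: any text would do here).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which – a bigram table, the crudest "language model".
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most fitting continuation for `word`, or None if unseen."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("the"))  # → 'cat' ("cat" follows "the" twice; "mat" and "fish" once each)
```

A real model scores continuations over a huge vocabulary with learned probabilities rather than raw counts, but the predict-and-append principle is the same.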
Long story short: it is interesting but all over the place and not exactly any good (but I like it, because it’s only rock and roll).
Talk to Transformer (bizarrely enough, still not abbreviated into Triple T – a serious oversight) was developed by Adam King. The application is based on OpenAI’s machine learning model GPT-2, which is known for producing coherent chunks of text that are barely distinguishable from human-written output. At least in the case of generic, bland-o, template-o, SEO-wet-dream text.
So – GPT-2 is really intense when it comes to generating text. But the thing with text generation is that you need to give the generative algorithm a direction in order to get results. Talk to Transformer does just that.
The idea behind Triple T is cute – to make an easy interface for showcasing GPT-2 in action. It is not so much a tool in itself as a glorified interactive advert, designed to inspire people to use GPT-2. Which is great.
The autocomplete feature is a good way of showing the flexibility of the system. It provides a reasonable amount of interactivity for users and shows just enough to make a positive impression.
If you give the algorithm one word – it is going to build a text out of it. At all costs. The length of the text depends on the generative potential of the word from the corpus point of view. In addition, word-vector relations add to the picture.
The usual result is a word salad snipped from here and there into a text that is readable, not exactly informative, but borderline entertaining. The reason is the limited amount of input. For lack of information, the algorithm starts to generate text from the text it generates – the phenomenon known as recursion. The contexts mix up and the whole thing strolls wherever the eyes are looking at the moment. Because of that you get that sweet, subtle nonsense. It is fun to play with.
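That feed-the-output-back-in drift can be sketched too: a greedy loop where each new word is predicted from the last word the model itself produced, so the output quickly detaches from the prompt. Again a toy bigram setup – the corpus and all names here are made up for illustration, not anything from GPT-2:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; with so little data the model falls into a loop fast.
corpus = "the cat sat and the cat sat and the cat sat on the mat".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def generate(seed, length=8):
    """Greedy autoregressive loop: each step predicts from the last word
    the model itself emitted, so generation feeds on its own output."""
    out = [seed]
    for _ in range(length):
        followers = bigrams.get(out[-1])
        if not followers:
            break  # dead end: the word never appeared mid-corpus
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

print(generate("the", 8))  # → 'the cat sat and the cat sat and the'
```

With a tiny corpus the loop degenerates into repetition almost immediately; a large model does the same drifting far more gracefully, which is the “subtle nonsense” effect.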
Things get better when you give the algorithm a complete sentence. With more stuff to play with, the algorithm is capable of producing a coherent text. If the sentence’s message is straightforward – you get a couple of vignettes around the central message that bring some color to the context. If the sentence is ambiguous or relies heavily on tongue-in-cheek meaning displacement – hilarity ensues.
I’ve typed “Kylie Minogue fans don’t masturbate” (known as one of the ways to decipher the KMFDM band name) and got a saccharine stream of consciousness heavily reminiscent of a teenage op-ed from good old Tumblr. It confirmed the assumption that Kylie Minogue fans don’t masturbate and then claimed that when they do, they listen to Taylor Swift’s Shake It Off (fair statement, tbh). Then the conversation switched to porn in general, with references to a Wall Street Journal study, and then it was cut off mid-sentence on “This is why”.
This kind of structure is called “divagation” – when the subject of the conversation strays to something somewhat related or even something else entirely. It’s nice to know that NLP generative algorithms are capable of it, because jumping from subject to subject within one text was never my strong suit and it is probably good for SEO.
If you give it a significant chunk of text – things get weirdly scattershot. At first, I dropped in a couple of generic randomly generated sentences and got nothing really special. It was more of the same, blending into generic text. Then I tried more sophisticated writing and the algorithm more or less adapted itself to the style. It added nothing to the text, but was serviceable enough to pass.
Then I thought it would be a good idea to try to autocomplete some short poetry. My usual suspects are well-known.
– Ezra Pound’s In a Station of the Metro delivered the best bad result – “Huh? I don’t get really get it. This is too dense.” Good to know that an NLP algorithm can’t dig ye olde Ezra.
– Kenneth Patchen’s “The Murder of Two Men by a Young Kid Wearing Lemon-colored Gloves” fared better. The autocomplete was nice and simple: “Now I Know You Love Me.”
– William Carlos Williams’ This Is Just to Say. I fed it the whole text and the resulting autocomplete was “!!! So, I guess I’ll be staying here again next time… Thank you so much!!”. Kinda cute.
Finally, I typed some mad-lib strings of random characters into it. The resulting text was an arcane something-something out of the depths of its big bad booty text corpus. Funnily enough, this seems to be the best way of using Talk to Transformer. The result was an inhuman slab of characters that insisted on comprehension but denied contemplation. Here’s an example:
“click click click clicking clicking clicking clicking CLICK CLICK CLICK CLICK CLICK CLICK
Uncomment this line to show
Ear Beard **********
DOO DA DA DA DA DA DA DA DA DA DE DIO DA DA DA
honk hum tinkle tisk tisk tsk tsk tsk p.p. p.p. p.p.
hokum hohum hmm hmph hmmm mmmm hmm mmmm mmmmm mmmmmm mmmm hm hm mmmmmm”
Now this is something I would like to see more of.
Overall, I really like what Talk to Transformer has to offer. It is a very good tool for tongue-in-cheek inserts and inhuman machine texts.