It’s a trouble-making question.
And no, I don’t look at it in that way. But others do — and it is so very tempting to find some way of ruling on what makes a good (or even ‘great’) poem. Michael Dalvean in ‘Ranking Contemporary American Poems’ (thanks to Tim Love for sending the link) claims ‘By using computational linguistics it is possible to objectively identify the characteristics of professional poems and amateur poems’.
What he says sounds perfectly reasonable: ‘Placing poems on a continuum that is based on the extent to which poems possess the craftsmanship of a professional may be a step towards explaining why some poets are “greater” than others’.
Dalvean refers to two previous studies using computational linguistics to crack poetry. The first of these (Forsythe, 2000) compared the features of regularly anthologised poems with ‘obscure’ (un-anthologised) ones. It concluded that:
successful poems had fewer syllables per word in their first lines and were more likely to have an initial line consisting of monosyllables. It was also found that successful poems had a lower number of letters per word, used more common words, and had simpler syntax. Thus, contrary to what we might expect, the more successful poems used simpler language. In essence, poems that use language that is simple and direct are more likely to be reproduced in anthologies.
A second study (Kao and Jurafsky, 2012) compared 100 poems from a reputable anthology with another 100 from (oh dear) www.amateurwriting.com. It found that ‘professional poets used words that were more concrete’, while the amateurs were ‘more likely to use perfect rhymes . . . more alliteration and more emotional words’. The ‘professional poets’ also used more words. Period. Not cleverer words – a wider variety of simple ones.
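Out of curiosity, here is a rough sketch (mine, not the studies’ code) of how those surface features might be measured: syllables per word in the opening line, letters per word, and the variety of words used. The syllable count is a crude vowel-group estimate, and the function names are my own invention.

```python
import re

def syllables(word):
    """Crude syllable estimate: count groups of vowels, minimum one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def surface_features(poem):
    """Surface measures of the kind the two studies report."""
    lines = [l for l in poem.splitlines() if l.strip()]
    first = re.findall(r"[a-z']+", lines[0].lower()) if lines else []
    words = re.findall(r"[a-z']+", poem.lower())
    return {
        "syllables_per_word_line1": sum(map(syllables, first)) / max(1, len(first)),
        "line1_all_monosyllables": all(syllables(w) == 1 for w in first),
        "letters_per_word": sum(len(w) for w in words) / max(1, len(words)),
        "word_variety": len(set(words)) / max(1, len(words)),  # type/token ratio
    }

print(surface_features("Shall I compare thee to a summer's day?\nThou art more lovely and more temperate"))
```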
Dalvean builds on these studies but adds ‘a broader range of linguistic variables’: 68 linguistic variables derived from Linguistic Inquiry and Word Count (Pennebaker, Francis, & Booth, 2001) and 32 psycholinguistic variables from the Paivio, Yuille and Madigan (1968) word norms.
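For what the word-norm side of this looks like in practice, here is a minimal sketch: each word’s concreteness rating is looked up and averaged over the poem. The miniature norms table is invented for illustration; the real Paivio, Yuille and Madigan lists run to hundreds of human-rated words, and LIWC categories work on the same look-up principle.

```python
import re

# Hypothetical miniature norms table (word -> concreteness on a 1-7 scale);
# the real Paivio, Yuille and Madigan (1968) norms are far larger.
WORD_NORMS = {
    "stone": 6.9, "river": 6.7, "hand": 6.5,
    "truth": 2.1, "sorrow": 2.4, "idea": 1.8,
}

def mean_concreteness(poem):
    """Average the concreteness ratings of every rated word in the poem."""
    words = re.findall(r"[a-z']+", poem.lower())
    rated = [WORD_NORMS[w] for w in words if w in WORD_NORMS]
    return sum(rated) / len(rated) if rated else None

print(mean_concreteness("The river ran over stone; the idea of sorrow stayed."))
```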
It gets complicated here (you can read the original paper if you follow the link above). The bit that grabbed me was the idea that there might be an
algorithm that is able to correctly classify poems as professional/amateur with an accuracy of 80% using linguistic variables. There are several applications for such an algorithm. For example, a publisher who needs a quick way of sorting through the voluminous submissions received on a weekly basis could first select a filtered list by running poems through such an algorithm.
Yessss! Though it’s not yet July (my reading window), the early can’t-waiters have begun to trickle through the box. Is this the answer? There is a machine to put the poems through. It might be possible not to read them at all, but just to process them for value, like holding a £20 note up to the light to check it’s not a forgery.
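A machine like that is not hard to imagine. Here is a hypothetical sketch of the filtering idea: train a simple classifier on poem-level feature vectors (labelled professional or amateur) and keep only submissions that score above a threshold. The feature values, labels and threshold below are placeholders for illustration; this is not Dalvean’s actual model.

```python
# Hypothetical sketch of the 'slush-pile filter' idea, not Dalvean's model.
from sklearn.linear_model import LogisticRegression

# Placeholder training data: each row is a poem's feature vector (for example
# the surface and concreteness measures sketched earlier), labelled
# 1 for 'professional' and 0 for 'amateur'.
X_train = [
    [1.1, 3.8, 5.2], [1.2, 4.0, 5.5], [1.3, 3.9, 5.1],   # professional
    [1.8, 4.9, 3.1], [1.7, 5.1, 2.8], [1.9, 4.8, 3.4],   # amateur
]
y_train = [1, 1, 1, 0, 0, 0]

model = LogisticRegression().fit(X_train, y_train)

def filter_submissions(submissions, threshold=0.5):
    """Keep only submissions whose 'professional' probability clears the bar.
    `submissions` maps a title to its feature vector."""
    kept = {}
    for title, features in submissions.items():
        prob = model.predict_proba([features])[0][1]
        if prob >= threshold:
            kept[title] = round(prob, 2)
    return kept

print(filter_submissions({"Poem A": [1.2, 3.9, 5.3], "Poem B": [1.9, 5.0, 3.0]}))
```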
Here is the link: http://www.poetryassessor.com/poetry/. Go there to test your own poems. I put some of mine through the mangle (of course). Alas, most of them were horribly amateur, but yours might fare better.
Meanwhile, back to peeling (see below). Others peel after sitting outside in the sunshine. I peel inside (peeling stamps off envelopes) ready for an onslaught of poems in July, some of which will forget to include SAEs. The Royal Mail continues to assist, though not on purpose. . . .
Next Saturday’s NAWE event at CCA in Glasgow promises to help poets get onto the ‘professional’ spectrum, though in a more strategic manner. I’m not sure whether it’s fully booked yet, but if you’re in Scotland, it’s worth a look. I will be there.