I came across this image on Facebook:

[Image: Anagram]

I was struck by the unusual surnames (“Bok” and “Lexier”), and doubly suspicious about the “x” in the latter. The premise of this image is that the left text was created independently, and then the text on the right was created from it, using awesome anagram skills. The image would be a lot less impressive if the two paragraphs were crafted together, deliberately written to be anagrams of one another. So I wondered: is there a way to figure out whether this is real or fake?

To Wikipedia!

A simple way to determine whether a corpus of text is really “natural” is to run a letter frequency analysis. Wikipedia has an article about letter frequency, which includes a data table of the relative frequencies of letters in the English language. As the Wikipedia article notes, many different studies have been done on this, and while their tables all differ slightly, the general structure is the same. So, let’s analyze this image with a little Python. First, we’ll define the strings and check that they really do have the same letters, the same counts, etc.

In [1]: A = ("this text and the one beside it are equal. i wrote this one first, "
    "and then i gave it to my friend christian bok and asked him to "
    "generate a new text using every letter and every punctuation mark "
    "that i used in mine. the other text is his.")
 
In [2]: B = ("micah lexier requested in advance that i reinvent his text. so i "
    "unknotted it and reknitted it into this very form, but then i began "
    "to think that his message had already resewn a touted art of "
    "genuine poetry. his eerie text was mine.")
 
In [3]: from collections import Counter
acount = Counter(A)
bcount = Counter(B)

 


OK, let’s look for differences. By using Python’s built-in set(), it’s a one-liner to see if we’re missing any letters:

In [4]: set(acount.keys()).symmetric_difference(set(bcount.keys()))
 
Out[4]: set([])
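For anyone unfamiliar with symmetric_difference: it returns the elements that appear in exactly one of the two sets, so an empty result means the two texts use exactly the same set of characters. A tiny illustration (not from the original session):

# Mini-example, not part of the original session: elements in exactly
# one of the two sets.
print(set("abc").symmetric_difference(set("abd")))   # -> set(['c', 'd'])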

 


OK, looks good so far – true to the claim, the two texts use exactly the same set of characters (letters, punctuation marks, and the space). Now let’s look for any character counts that differ.

In [5]: for ch in sorted(acount.keys()):
    if bcount[ch] != acount[ch]:
        print "Difference: A has %d occurrences of '%s', while B has %d" % (acount[ch], ch, bcount[ch])
 
Difference: A has 48 occurrences of ' ', while B has 43

 

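Interesting – the only count that differs is the space character; A has five more spaces than B. As a quick extra check (not shown in the original session), we can confirm that once spaces are stripped out, the two texts are exact anagrams of each other:

# Extra check, not in the original session: with spaces removed, the two
# texts should contain exactly the same multiset of characters.
print(len(A) - len(B))   # the 5 extra spaces in A
print(Counter(A.replace(" ", "")) == Counter(B.replace(" ", "")))   # should be True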

So the claim of the text holds true – space is not a punctuation mark, and every letter and punctuation count matches exactly. Now let’s dig into the letter counts. We’ll first list the characters (stripping out punctuation and spaces) in decreasing order of frequency, just for a quick sanity check:

In [6]: letter_order = [ch for (ch, count) in sorted(acount.items(), key=lambda t: t[1]) if ch not in ' ,.'][::-1]
 
In [7]: print " ".join(letter_order)
e t i n a r s h o d u m x y v k g w l f b c p q

 


OK, what’s the natural letter frequency in English? Let’s define a dict based on the table in the Wikipedia entry.

In [8]: english_freq = dict(a=8.167, b=1.492, c=2.78, d=4.25,
e=12.7, f=2.23, g=2.015, h=6.094, i=6.966, j=0.153, k=0.772, l=4.025, m=2.406,
n=6.749, o=7.507, p=1.929, q=0.095, r=5.987, s=6.327, t=9.056, u=2.758, v=0.978,
w=2.36, x=0.15, y=1.974, z=0.074)
 
In [9]: print " ".join([ch for (ch,count) in sorted(english_freq.items(), key=lambda t:t[1])][::-1])
e t a o i n s h r d l c u m w f g y p b v k j x q z

 


Well, at first glance, this kind of lines up. A few things look out of whack, though. X has moved way up in the ordering, and O and L both have shifted significantly down.
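To put rough numbers on those shifts, here’s a small sketch (not part of the original post) comparing each letter’s rank in the text’s ordering against its rank in the natural English ordering:

# Added sketch: rank shift per letter; positive means the letter sits
# higher (is relatively more frequent) in the text than in natural English.
natural_order = [ch for (ch, f) in sorted(english_freq.items(), key=lambda t: t[1])][::-1]
for ch in letter_order:
    shift = natural_order.index(ch) - letter_order.index(ch)
    if abs(shift) >= 5:   # arbitrary cutoff, just to surface the big movers
        print("%s: %+d places vs. natural English" % (ch, shift))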

Enough Guessing, Plot All The Things!

But let’s get real with this – let’s just plot the frequencies side by side. Time to bust out some Python data tools!

In [10]: import numpy as np
 
In [11]: counts = np.array([acount[l] for l in letter_order])
 
In [12]: frequency = counts / float(sum(counts))   # The frequency in our test corpus
 
In [13]: nat_frequency = np.array([english_freq[l] for l in letter_order]) / 100.0
 
In [14]: dummyindex = np.arange(len(letter_order))
 
In [15]: width = 0.35  # width of the bars in the bar plot
 
In [27]: from matplotlib import pyplot as plt
r1 = plt.bar(dummyindex, frequency, width, color="#7fc97f")
r2 = plt.bar(dummyindex+width, nat_frequency, width, color="#fdc086")
plt.xticks(dummyindex+width, letter_order)
plt.legend( (r1, r2), ("Text in photo", "English Language"))
plt.title("Relative frequency")
plt.ylabel("Frequency")
fig=plt.gcf(); fig.set_size_inches(10,6)

 


[Figure: “Relative frequency” bar chart, comparing the text in the photo to the English language baseline]

Very interesting – we can immediately see that O, X, and L are way out of whack with the natural distribution of English letters. It looks like C, P, and maybe T are also not quite right. Let’s compute the differences and plot them, so we can easily find the most curious letters. To do this concisely, we’ll use Numpy and a little fancy indexing.

In [29]: rawdeltas = frequency - nat_frequency   # both are already NumPy arrays
 
In [30]: sorting = np.argsort(rawdeltas)[::-1]
 
In [31]: deltas = rawdeltas[sorting]
 
In [32]: delta_letters = np.array(letter_order)[sorting]
 
In [33]: plt.bar(dummyindex, deltas, width, color="gray")
plt.xticks(dummyindex, delta_letters)
plt.title("Text Frequency - Natural Frequency")
fig=plt.gcf(); fig.set_size_inches(10,6)

 


[Figure: “Text Frequency - Natural Frequency” bar chart]

So we can clearly see where the text in the image most deviates from the natural distribution of English letters: it has quite a few more occurrences of T, I, and E, and quite a few fewer of O and L.
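To attach numbers to the chart (again, not in the original post), we can print the extremes of the sorted deltas:

# Added: the largest positive and negative deviations, in percentage points.
for ch, d in list(zip(delta_letters[:3], deltas[:3])) + list(zip(delta_letters[-3:], deltas[-3:])):
    print("%s: %+.2f percentage points" % (ch, d * 100))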

Conclusion

The text corpus only has 187 letters, so it’s a pretty small sample, and it’s entirely possible that it deviates from typical English letter frequencies just by chance. However, the significant differences here – especially in some of the most common letters (T, L, E) – are enough to raise doubt in my mind.
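One way to go beyond eyeballing – not something the original analysis did – would be a chi-squared goodness-of-fit test of the observed letter counts against the expected counts under natural English frequencies, e.g. with scipy.stats. Keep in mind that the rarest letters have expected counts well below 5 here, which strains the chi-squared approximation:

# Added sketch, assuming scipy is available alongside the stack above.
from scipy import stats

# Expected counts, renormalized over the letters that actually occur
# (the text has no 'j' or 'z'), so the observed and expected sums match.
expected = nat_frequency / nat_frequency.sum() * counts.sum()
chi2, pvalue = stats.chisquare(counts, f_exp=expected)
print("chi-squared = %.1f, p-value = %.4g" % (chi2, pvalue))
# A small p-value would mean these letter counts are unlikely to arise from
# typical English; with only 187 letters, treat it as suggestive, not proof.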

Disagree? Want to look at it a different way? This blog post is also available as an IPython Notebook shared via our cloud-based Python-in-the-Browser app, Wakari. Wakari lets you easily run Python 2.6 – 3.3, with Numpy, Scipy, Matplotlib, pandas, and IPython Notebook, all right from your browser. Sign up for the free beta today!

By publishing and sharing this IPython Notebook with Wakari, Trent Oliphant, another Continuum developer, was able to perform his own analysis to see if my conclusions on “natural” English distribution would hold true for other texts. His analysis is at the bottom of this shared IPython Notebook.


About the Author

Peter Wang

Chief Technology Officer & Co-Founder

Peter Wang has been developing commercial scientific computing and visualization software for over 15 years. He has extensive experience in software design and development across a broad range of areas, including 3D graphics, geophysics, la …

