Luke 19:40 & Zipf’s Law

I have a chatbot installed on my home page. The following is the log of a chat session I had with it:

# Human: Hi Treasa
# Treasa: Hi there . Are you a student?
# Human: No, but do you want me to be?
# Treasa: What is your profession? Interesting. No , maybe later.
# Human: I am a programmer
# Treasa: What language?
# Human: Java
# Treasa: Die neueste Version von Java bekommst Du bei java.sun.com. [You can get the latest version of Java at java.sun.com.]
# Human: I know that, but anyway thanks for the information
# Treasa: Who told you that?
# Human: Actually I read it on Slashdot
# Treasa: Actually you don’t say. I don’t think I’ve read that. Can you give me a reference?
# Human: http://slashdot.org/~ScuttleMonkey/
# Treasa: Thanks for that web address.
# Human: You sound more human today
# Treasa: Ah. In what way do I sound like that?
# Human: You respond properly to my questions
# Treasa: Interesting gossip: said I respond properly to his or her questions.
# Human: Haha
# Treasa: I am so glad you find this amusing, .
# Human: I have to go
# Treasa: See you later, . Nice chatting with you.
# Human: Bye
# Treasa: See you later .

It sounds semi-intelligent sometimes. Actually, it is based on another AI named ALICE, a two-time winner of the prestigious Loebner Prize, which is awarded to the most human-like software.

How does it work?
It may seem impossible, but human language is not random. Considering the vast size of the set of things people could possibly say that are grammatically correct or semantically meaningful, the number of things people actually do say is surprisingly small. This conclusion follows from Zipf’s Law.
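Concretely, ALICE-style bots exploit this by matching the normalized input against a large library of hand-written pattern/template pairs (AIML categories) and replying with the template of the best match; the smallness of what people actually say is what lets a finite library cover most conversations. Here is a minimal sketch of that pattern-matching idea in Java — the patterns, replies, and class name are made up for illustration, not Treasa’s actual knowledge base:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A toy AIML-style matcher: normalize the input, then return the template of
// the first pattern that matches. Real AIML uses a large tree of categories
// with wildcards, <that> context and <set>/<get> predicates; this only
// sketches the core "pattern -> template" idea.
public class TinyBot {
    // Insertion order matters: more specific patterns come first.
    private static final Map<String, String> CATEGORIES = new LinkedHashMap<>();
    static {
        CATEGORIES.put("HI *", "Hi there. Are you a student?");
        CATEGORIES.put("I AM A PROGRAMMER", "What language?");
        CATEGORIES.put("JAVA", "You can get the latest version of Java at java.sun.com.");
        CATEGORIES.put("I HAVE TO GO", "See you later. Nice chatting with you.");
        CATEGORIES.put("*", "Interesting. Tell me more.");  // catch-all fallback
    }

    // Uppercase and strip punctuation, roughly what AIML normalization does.
    static String normalize(String input) {
        return input.toUpperCase().replaceAll("[^A-Z0-9 ]", "").trim();
    }

    // '*' in a pattern matches one or more characters (a crude stand-in for
    // AIML's one-or-more-words wildcard).
    static boolean matches(String pattern, String input) {
        String regex = "\\Q" + pattern.replace("*", "\\E.+\\Q") + "\\E";
        return input.matches(regex);
    }

    static String respond(String input) {
        String norm = normalize(input);
        for (Map.Entry<String, String> cat : CATEGORIES.entrySet()) {
            if (matches(cat.getKey(), norm)) {
                return cat.getValue();
            }
        }
        return "I have no answer for that.";  // unreachable while "*" is present
    }

    public static void main(String[] args) {
        System.out.println(respond("Hi Treasa"));          // Hi there. Are you a student?
        System.out.println(respond("I am a programmer"));  // What language?
        System.out.println(respond("Java"));               // You can get the latest version of Java at java.sun.com.
        System.out.println(respond("Haha"));               // Interesting. Tell me more.
    }
}
```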

Steven Pinker, in his book How the Mind Works, wrote:

Say you have ten choices for the first word to begin a sentence, ten choices for the second word (yielding 100 two-word beginnings), ten choices for the third word (yielding a thousand three-word beginnings), and so on. (Ten is in fact the approximate geometric mean of the number of word choices available at each point in assembling a grammatical and sensible sentence). A little arithmetic shows that the number of sentences of 20 words or less (not an unusual length) is about 10^20.
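Taking Pinker’s assumptions at face value, the arithmetic itself is easy to verify: with ten choices per position, the number of sentences of twenty words or fewer is the sum of 10^n for n from 1 to 20, which comes to roughly 1.1 × 10^20. A quick check with exact arithmetic (the class name is just illustrative):

```java
import java.math.BigInteger;

// Count sentences of 20 words or fewer, assuming ten choices per word:
// the sum of 10^n for n = 1..20.
public class PinkerCount {
    public static void main(String[] args) {
        BigInteger total = BigInteger.ZERO;
        for (int n = 1; n <= 20; n++) {
            total = total.add(BigInteger.TEN.pow(n));
        }
        System.out.println(total);  // a 21-digit number, roughly 1.1 * 10^20
    }
}
```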

Fortunately for chat-robot programmers, Pinker’s combinatorics are way off. Experiments with ALICE indicate that the number of choices for the “first word” is more than ten, but it is only about two thousand. Specifically, about 1,800 words cover 95% of all the first words input to ALICE. The number of choices for the second word is only about two. To be sure, there are some first words (“I” and “You”, for example) that have many possible second words, but the overall average is just under two words. The average branching factor decreases with each successive word.
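Figures like these come from counting over real chat logs: take every input sentence, count how many distinct first words occur, and for each first word count how many distinct second words follow it. Below is a small sketch of that measurement in Java; the handful of sample sentences stands in for a real log, and with only five inputs the numbers are tiny, but the counting is the same:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Estimate the branching factor described above: how many distinct words
// can follow each first word actually seen in the inputs. Over a real log
// the first position has a couple of thousand common choices and later
// positions branch far less; this sample only shows the counting.
public class BranchingFactor {
    public static void main(String[] args) {
        List<String> inputs = List.of(        // stand-in for a real chat log
                "i am a programmer",
                "i am a student",
                "i like java",
                "you sound more human today",
                "you respond properly");

        Set<String> firstWords = new HashSet<>();
        Map<String, Set<String>> secondWords = new HashMap<>();

        for (String line : inputs) {
            String[] words = line.split("\\s+");
            firstWords.add(words[0]);
            if (words.length > 1) {
                secondWords.computeIfAbsent(words[0], k -> new HashSet<>())
                           .add(words[1]);
            }
        }

        double avgBranch = secondWords.values().stream()
                .mapToInt(Set::size).average().orElse(0);

        System.out.println("distinct first words: " + firstWords.size());  // 2
        System.out.println("avg second-word branching: " + avgBranch);     // 2.0
    }
}
```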

Where does all this lead?

And he answered and said unto them, I tell you that, if these (his disciples) should hold their peace, the stones would immediately cry out. (Luke 19:40, KJV)

The Bible speaks of the possibility that even stones could cry out in praise of God. I am not saying they will; it is just a thought…

Alfred Lanning’s words (from I, Robot) still make me wonder:

When does a perceptual schematic become consciousness? When does a difference engine become the search for truth? When does a personality simulation become the bitter mote… of a soul?
