Wednesday, 31 December 2008

Antithetic.

In recent days, I've taken to visiting chatrooms and asking the inhabitants how they know they are striking up a conversation with a human being and not a computer.

Effectively, I am reversing the aims of the Turing test when I do this: instead of a piece of artificial intelligence trying to persuade a human subject that he or she is conversing with another human, I am a human endeavouring to convince another human that I am in reality a computer (pretending to be human).

Strictly speaking, the aims of the Turing test are not actually reversed, because the correct setup requires three parties: a human subject (A) alone in a room; a human (B) trying to convince (A) that (B) is the human; and a computer (C), running some artificial-intelligence code, whose role is to persuade (A) that (C) is the human subject and that (B) is not.
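To make the role structure concrete, here is a minimal sketch in Python of the three-party setup; the function names and the toy judging step are my own inventions, not anything prescribed by Turing:

    import random

    def imitation_game(ask, human_reply, machine_reply, rounds=5):
        """A toy rendering of the three-party test: (A) questions (B) and (C),
        then guesses which transcript came from the human."""
        transcripts = {"B": [], "C": []}
        for _ in range(rounds):
            question = ask()
            transcripts["B"].append(human_reply(question))    # (B), the human
            transcripts["C"].append(machine_reply(question))  # (C), the machine
        # (A)'s verdict; a coin flip stands in for real judgement here.
        return random.choice(["B", "C"]), transcripts

The coin flip merely marks the place where (A)'s judgement would sit; nothing in the sketch attempts to model that judgement.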

In the pared-down and reversed test that I've implemented, there are only two subjects: (A) and the negation of (C). The modulating influence of (B) is not present at all. Despite the omission, I assert that I would have passed the negation of the Turing test if I had managed to convince a human subject that I am a computer (pretending to be human).

I've failed to convince the chatroom (and instant messenger) inhabitants that the entity with whom (which?) they're communicating is a machine. Does this failure have any implications for those involved in getting a machine through the real Turing test? Common criticisms have been:
  • sentence structure is too obviously written by a human
  • fake error messages such as [text cannot be read at location n] and [string error] are evidently convoluted (see the sketch below)
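For concreteness, the fake errors could be generated by something as simple as the following sketch; the two message shapes are the ones quoted above, while the trigger probability and the range of n are arbitrary choices of mine:

    import random

    # The two message shapes are those quoted in the list above; the
    # trigger probability and the range of n are arbitrary for this sketch.
    ERROR_TEMPLATES = [
        "[text cannot be read at location {n}]",
        "[string error]",
    ]

    def maybe_fake_error(probability=0.1):
        """With the given probability, return a machine-flavoured error string."""
        if random.random() < probability:
            template = random.choice(ERROR_TEMPLATES)
            return template.format(n=random.randint(0, 4096))
        return None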
I'd suggest that spelling mistakes at intermittent intervals and typical displays of human uncertainty ("let me look up the answer to that"; "there's more than one way of thinking about the question you just posed") are necessary, if not sufficient, conditions for getting an artificial intelligence system through the test. A proliferation of error messages, however, seems to tip the opinion of the human subject into thinking that their interlocutor is a human pretending to be a computer.
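Turned around, the same observation suggests a post-processor that blemishes an artificial intelligence's replies. A minimal sketch, assuming a simple transposition typo and the two hedging phrases quoted above (both rates are guesses of mine):

    import random

    HEDGES = [
        "let me look up the answer to that",
        "there's more than one way of thinking about the question you just posed",
    ]

    def humanise(reply, typo_rate=0.03, hedge_rate=0.2):
        """Intermittently transpose adjacent letters and occasionally open
        with a display of human uncertainty."""
        chars = list(reply)
        for i in range(len(chars) - 1):
            if chars[i].isalpha() and chars[i + 1].isalpha() and random.random() < typo_rate:
                chars[i], chars[i + 1] = chars[i + 1], chars[i]
        text = "".join(chars)
        if random.random() < hedge_rate:
            text = random.choice(HEDGES) + "... " + text
        return text

The low rates are the point: the blemishes must come at intermittent intervals, since a slip in every reply would read as just another kind of mechanical perfection.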

The paradoxical point I wish to make is that particular exaggerated human traits are enough to convince a subject that the conversationalist they cannot see is a computer. Perfect spelling, an exaggerated grasp of the tenets of a particular body of knowledge, the making of exhaustive lists of specific information - these three things suggest an other-worldliness which can only be embodied by a machine.