Results from present-day instantiations of the Turing test, most notably the annual Loebner Prize competition, have fueled the perception that the test is on the verge of being passed. With this perception comes the misleading implication that computers are nearing human-level intelligence. As currently instantiated, the test encourages an adversarial relationship between contestant and judge. We suggest that the underlying purpose of Turing's test would be better served if the prevailing focus on trickery and deception were replaced by an emphasis on transparency and collaborative interaction. We discuss particular examples from the family of Fluid Concepts architectures, primarily Copycat and Metacat, showing how a modified version of the Turing test (described here as a "modified Feigenbaum test") has served as a useful means for evaluating cognitive-modeling research and how it can suggest future directions for such work.