A PAS Interpretation of Artificial Intelligence (AI)
Earl Rodd, April 2023
This article considers the language-model AI programs currently (April 2023) very much in the news, such as ChatGPT and Microsoft Bing Chat. These instances of AI share certain traits and constraints.
Note that there are other kinds of AI. Among the most familiar to people are the AI systems used in self-driving cars, which incorporate visual and other sensory inputs (e.g., speed and direction).
There has been much discussion about the extent to which language model AIs are "intelligent", followed by speculation about whether AI in general will become so human-like that it will make "decisions" that result in the AI acting apparently autonomously. This article focuses on the question, "Are language model AIs actually intelligent?" The question is addressed by considering the PAS test, the WB(G), which was based on the Wechsler-Bellevue IQ test, the predecessor to the WAIS (Wechsler Adult Intelligence Scales). In this discussion, what matters is not the details of the questions on the various forms of the tests (and thus not the differences between an up-to-date WAIS and a PAS test), but the nature of the subtests.
Therefore, while a language model AI, or even an AI-driven robot with auditory/visual input, might (or might not) perform very well and appear to be "intelligent", it does not process the "intelligence test" subtests in the same way as a human: it does not (yet) include any ability for its performance to depend on the emotional state of the test taker in the testing environment.
This leaves the question of whether an AI-driven robot could be programmed to have a particular PAS profile. While it would be straightforward to program an AI robot to perform the specific PAS subtests in a certain pattern of high/low scores, making a language model AI respond like a human with a certain PAS profile requires a level of "intelligence" not now known in AI. This "intelligence" includes factoring emotions into responses, and those emotions depend on life history, what has happened in the last hour, the tone of voice of the questioner, etc.
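To make the "straightforward" half of this claim concrete, below is a minimal sketch of my own (not drawn from the PAS literature or any actual scoring key): producing a fixed pattern of high/low subtest scores is just a lookup from subtest name to a pre-scored canned answer. The subtest names echo the examples later in this article; the answer bank and score levels are hypothetical.

    # Minimal sketch (hypothetical): a fixed high/low PAS-style score
    # pattern encoded as data. Not an actual PAS scoring key.
    TARGET_PROFILE = {
        "Comprehension": "high",
        "Similarities": "high",
        "Arithmetic": "low",
    }

    # Hypothetical bank of pre-scored answers, one per subtest.
    CANNED_ANSWERS = {
        "Comprehension": {"high": "a 2-point answer", "low": "a 0-point answer"},
        "Similarities": {"high": "a 2-point answer", "low": "a 0-point answer"},
        "Arithmetic": {"high": "the correct result", "low": "an incorrect result"},
    }

    def respond(subtest):
        # Pick the canned answer whose quality matches the target score level.
        level = TARGET_PROFILE.get(subtest, "high")
        return CANNED_ANSWERS[subtest][level]

    print(respond("Arithmetic"))  # -> "an incorrect result"

The hard part, as argued above, is the opposite direction: producing free-form answers whose quality varies with emotional state and life history the way a human's answers do.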
Language model AIs function so differently from the way humans engage with an intelligence test that we must consider them qualitatively different from humans. They clearly lack many aspects of human intelligence. I believe that this difference will be very difficult to overcome because, while we understand how digital computers and programs work, we do not understand how the human brain works when it comes to making decisions, even simple ones like how to answer a question or perform a task on an intelligence test.
This does not mean that humans cannot be fooled by a language model AI into thinking it is human (within an environment constrained to verbal input and output), or that a human cannot be deceived into placing unwarranted trust in the output of a language model AI. The latter is problematic because humans decide how much trust to place in a person based on experience, which includes more than just verbal responses. Language model AIs do not provide most of the clues humans use to estimate trustworthiness.
Finally, while I have touched on aspects of the human soul, emotions and our will, even the PAS does not describe another whole part of who we are as God's Creation - the spirit in man. Human science cannot even teach us how to tell if something is alive or dead. We intuitively know some things are alive, but a precise scientific criterion eludes us. This is all far beyond what digital computers do.
To use ChatGPT, the environment differs from an actual PAS (WB(G)) test in that the questions are typed rather than read aloud to the subject. Some results are shown below to give the flavor of the responses of this particular chat language model AI. The results were obtained using https://chat-gpt.org/chat
Question | Answer
Comprehension: Laws necessary | Very long answer, which would be scored 2 points.
Comprehension: Taxes | Long discussion, which is a 1-point answer.
Comprehension: Fire in Theater | Long answer starting with the 0-point response (shout "fire"); the full answer would be scored only 1 point. The answer is first qualified with, "As an AI language model, I am not capable of experiencing situations as I don't have a physical presence."
Similarities: Orange/Banana | 2 points (with many subsidiary similarities).
Similarities: Fly/Bush | The 2-point answer, followed by all the differences. This was preceded by the qualifier, "As an AI language model, I do not have personal perspectives but a possible answer could be:"
Arithmetic: Airplane | Long explanation of the logic used to get the correct answer.
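For readers who want to repeat this kind of probe programmatically rather than through the web page above, the following is a minimal sketch assuming the OpenAI Python client (its pre-1.0 ChatCompletion interface) and an API key. This is not the method used to produce the table above, and the prompts are paraphrased, subtest-style questions, not the actual test wording.

    # Minimal sketch: submit typed, test-style questions to a chat model.
    # Assumes the OpenAI Python client (pre-1.0 interface) and an API key;
    # the results in the table above came from https://chat-gpt.org/chat instead.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Paraphrased, subtest-style prompts (not the actual test wording).
    questions = [
        "Why does society need laws?",
        "In what way are an orange and a banana alike?",
    ]

    for q in questions:
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": q}],
        )
        print(q)
        print("->", response.choices[0].message.content)

Note that, unlike a proctored test, nothing in this setup conveys tone of voice or the emotional state of the test taker, which is precisely the gap discussed earlier in this article.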