The Discovery Institute posted some interesting analyses of ChatGPT (see here and here).
Those analyses showed that ChatGPT is 'more' than just a neural network, and that there are likely humans in the loop.
My operating assumption had been that OpenAI was using humans in the middle: taking the prompts, feeding them to GPT-3/3.5, then copying, pasting, and editing those responses before they went out to the end users.
I have recently revised that assumption. Now, I believe that ChatGPT started in the way I originally assumed, but since then OpenAI has created a second neural network to emulate those 'humans in the loop', using the original 'humans in the loop' as the training set. I think they are continuously augmenting the 'humans in the loop' neural network as a quality control measure. In other words, whenever users give a response a thumbs-down, a human creates a new response (that never goes out to the end user, but is used to retrain the second neural net).
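If my conjecture were right, the data flow might look something like the sketch below. This is purely illustrative Python; every name in it (Interaction, human_rewrite, finetune, quality_control_cycle) is hypothetical and reflects my guess, not anything OpenAI has documented. The point is only the flow: thumbs-downed outputs become human-written training pairs for the second network, never replacement outputs sent to users.

```python
# Purely illustrative sketch of the conjectured quality-control loop.
# Every name here is hypothetical; none of it reflects OpenAI's
# actual infrastructure.
from dataclasses import dataclass

@dataclass
class Interaction:
    prompt: str
    model_response: str
    thumbs_down: bool  # end-user feedback on the response

def human_rewrite(prompt: str, bad_response: str) -> str:
    # Stand-in for a human annotator writing a better response.
    # In the conjecture, this text never reaches the end user.
    return f"[human-written replacement for: {prompt!r}]"

def finetune(model: dict, pairs: list) -> dict:
    # Stand-in for a fine-tuning step on prompt -> corrected-response
    # pairs; here we just accumulate them as training data.
    model.setdefault("training_data", []).extend(pairs)
    return model

def quality_control_cycle(interactions: list, emulator: dict) -> dict:
    """One pass of the conjectured loop: collect thumbs-downed responses,
    have humans rewrite them, and retrain the 'human in the loop' emulator."""
    corrections = [
        (item.prompt, human_rewrite(item.prompt, item.model_response))
        for item in interactions
        if item.thumbs_down
    ]
    return finetune(emulator, corrections)

# Example: one bad response yields one new training pair.
logged = [Interaction("What is 2+2?", "5", thumbs_down=True)]
emulator = quality_control_cycle(logged, {})
print(len(emulator["training_data"]))  # -> 1
```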
That is all pure conjecture on my part, of course.
Given such a vast training set (just dramatically expanded with the release of GPT-4), combined with a second 'human in the loop' emulator, I think we are pretty close to OpenAI or one of their competitors passing the Turing test: being able to fool a majority of the people who interact with it into thinking they are interacting with a real person.
However, I don't think the Turing test comes anywhere close to being a good test of Artificial General Intelligence. It is merely a good test of human gullibility.