In the black box model, the only thing that matters is what the outside observer sees. That observable behavior was the primary element in early 20th-century psychology, and the most modern observation tools are changing it. In the early 20th century the brain was largely unknown; there was no chance to research living brains. When EEG systems were developed, researchers could investigate the electrical activity of the brain's surface.
But tools like PET and MEG scanners opened new ways to see how the brain works. If we think of brain research in terms of the box model, the early 20th-century models were black box models. Modern scanners brought the glass box, or white box, model to brain research. And AI brought the grey box to neurology, in the form of brain-computer interfaces (BCI), where researchers use the brain's electrical signals to control computers and may soon read thoughts.
When developers use the black box method to test programs, they simply test that the program works. To build a black box model of psychological learning, we can think of a nine-year-old child. That child can read almost anything, and we can ask them to read even complicated scientific articles. The child can read the words, but doesn't understand what they mean. The black box means that the actor can do many things, but the action is the only thing the outside observer sees. That also means black box applications are not as safe as they could be.
And most computer applications are black boxes for users: a regular user sees the interface, and that's enough. In a white box, or glass box, application test, the tester examines the code. In a grey box test, the test unit checks both the functionality and the code. The fact is that the AIs we see are black-box applications. We see only the command line and the answer, or whatever the AI produces. We don't actually know whether the images and texts that we order the AI to make are made by some humans.
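The difference between black box and white box testing can be sketched in a few lines of Python. The `slugify` function and its tests here are invented for illustration, not taken from any real project:

```python
import unittest

def slugify(title: str) -> str:
    """Turn a title into a URL slug (hypothetical function under test)."""
    return "-".join(title.lower().split())

class BlackBoxTest(unittest.TestCase):
    """Black box: check only the observable input/output behavior."""
    def test_output(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

class WhiteBoxTest(unittest.TestCase):
    """White box: the tester has read the code, knows it relies on
    str.split(), and deliberately targets whitespace edge cases."""
    def test_whitespace_handling(self):
        # split() with no argument collapses repeated whitespace
        self.assertEqual(slugify("  Hello   World  "), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

A grey box test would combine the two: it checks the external behavior while also using some knowledge of the internal structure to choose its inputs.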
In the case of the black box, the only thing that matters is that the system gives the right answer. How the system produces that answer doesn't matter; only what the user sees means something.
Is AI a black box? The idea of AI intelligence is discussed surprisingly little. We know that AI can solve many problems independently, but the AI doesn't know what it is really doing. The AI can know many things, but it cannot have deep knowledge about them. The model is taken from black box psychology, where the key element is that the behavior an outside observer sees is all we can assess. In that sense, the AI just mimics human intelligence. The black box model means that the AI can give the right answers to some mathematical problems.
But the same AI cannot do anything else; that is the idea of the black box. In the black box model, it is enough that the AI gives the right answer. The AI must do only the things that the operator orders. So when the operator asks about uranium enrichment and similar topics, the AI must give the answer, but it must not make its own connections to things like weapon development. There can be rules that forbid the AI from giving outputs that could cause damage to operators or their environment, but those rules are in the AI's code. The AI doesn't create those rules itself.
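Such hard-coded rules could look like the following minimal sketch. The blocked-topic list, the function names, and the refusal message are all invented for illustration; real guardrail systems are far more sophisticated than keyword matching:

```python
# Hypothetical guardrail: the refusal rules live in the operator's
# application code, not in anything the AI decides for itself.
BLOCKED_TOPICS = ["weapon design", "malware source"]  # assumed examples

def run_model(prompt: str) -> str:
    """Stand-in for the underlying black-box model; here it just echoes."""
    return f"Model answer to: {prompt}"

def answer(prompt: str) -> str:
    """Return a canned refusal if the prompt matches a blocked topic."""
    lowered = prompt.lower()
    for topic in BLOCKED_TOPICS:
        if topic in lowered:
            return "Request refused by a rule in the operator's code."
    return run_model(prompt)
```

The point of the sketch is the division of responsibility: the model produces answers, but the allow/deny decision is made by fixed code that the model cannot change.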
The AI is sliding toward a grey box application. For autonomous learning, the AI must have access to its own source code, and it must also have the ability to test that code. Such an AI could involve three parts: a system that creates the code, a second system that tests and accepts the code, and a last system that keeps a backup copy of the AI's source code. The backup is needed in case of errors, and the system should have the ability to reject code that is not functional.