When we think about large language models (LLMs) and artificial general intelligence (AGI), we sometimes forget that an AGI would be an extended version of the LLM: a model that could handle any mission, provided it has the right dataset. And that leads to another point we sometimes forget: an AGI, too, can only work with the data it has. The system connects pieces of data into new forms, like assembling a puzzle; to generate anything, the AI needs data that it can arrange into a new order. The biggest difference between an AGI and an LLM is the scale of questions they can answer. A system is productive only if it has data it can handle, and that is one of the things we must realize.
AI systems are impressive, but they are also computer systems, and those systems have two layers: the physical (or "iron") layer and the software layer. The AI can run as a separate program or be integrated into the operating system. AI algorithms can even operate at the kernel level, with the AI software loaded into microchips. That can look like "iron-based AI", but it is still software, like every other AI.
When we think about AI and its shape, we must realize that even the best systems, including human brains, are useless without information.
The software sorts information like puzzle pieces; we call that process "thinking". Humans have two thinking speeds: fast and slow. The slow mode is analytic, and the fast mode works like a reflex.
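The two thinking speeds can be sketched in code. This is a minimal toy model, not how any real system works: the reflex table, its entries, and the fallback computation are all made up for illustration. The fast path is a precomputed lookup; the slow path is a deliberate computation that runs only when no reflex matches.

```python
# Toy model of "two thinking speeds" (hypothetical data and logic).

# Fast path: a precomputed, reflex-style lookup table.
REFLEXES = {"2+2": "4", "capital of France": "Paris"}

def slow_think(question: str) -> str:
    """Slow, analytic path: here a stand-in that evaluates arithmetic."""
    try:
        # Restrict eval to plain arithmetic expressions only.
        return str(eval(question, {"__builtins__": {}}))
    except Exception:
        return "unknown"

def think(question: str) -> str:
    """Try the fast reflex first; fall back to slow analysis."""
    if question in REFLEXES:
        return REFLEXES[question]
    return slow_think(question)
```

A query like `think("2+2")` returns instantly from the table, while `think("3*7")` falls through to the slower analytic path.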
Computers are useless without programs. Those programs are the algorithms that the AI uses to control data, and that is what makes cognitive AI, which seems simple, hard to solve.
We must make a system that learns like humans do, and then mimic that process in software. When we make a robot that reads a book and stores that information in its memory, we must realize that some things, like program code, are data that a computer can handle quite easily.
It simply reads the code and compiles it against its data. The system sees the details, or attributes, that let the database controller select the right database for the right programming language, such as C++. But when the system must handle abstract or uncertain data, there is a problem. When the system learns something by watching movies, it must match that data with the things it sees on the street.
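The easy case, routing code by its attributes, can be sketched like this. The registry and handler names below are hypothetical; the point is only that clear attributes (here, a file extension) map cleanly to a handler, while abstract data does not:

```python
# Minimal sketch of attribute-based routing (hypothetical registry).
# Clear attributes route cleanly; abstract data falls through.

LANGUAGE_HANDLERS = {
    ".cpp": "C++ compiler",
    ".py":  "Python interpreter",
    ".rs":  "Rust compiler",
}

def route(filename: str) -> str:
    """Return which toolchain should handle the file, or flag it."""
    for suffix, handler in LANGUAGE_HANDLERS.items():
        if filename.endswith(suffix):
            return handler
    # Abstract or uncertain data has no obvious handler.
    return "unclassified"

print(route("main.cpp"))   # C++ compiler
print(route("photo.jpg"))  # unclassified
```

A movie frame is like `photo.jpg` here: nothing in its attributes tells the system which category it belongs to, so the routing that works for source code breaks down.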
The problem is that the computer doesn't think. We can show it anything, such as a movie about circus artists, and tell it that the people it sees are "boxes". The computer may have details about real boxes, but it will still file those people in the database where boxes belong. That might seem ridiculous, but it is possible. The same applies if a robot gets the order to fetch the car.
The robot walks to the street and takes the first car it finds, if there is no program that makes it choose only the car its master owns. By itself, the robot makes no distinction between a car, a lorry, a van, or a truck; for the robot, they are all "cars". So if the robot must get a car to haul some sand, it can walk to the nearest vehicle, take it, and simply dump the sand in the trunk. The robot must have information about what type of vehicle the task of carrying sand requires.
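The difference between the naive robot and the constrained one can be sketched as a selection problem. The vehicle data, attribute names, and distances below are invented for illustration; the point is that without constraints the robot grabs the nearest vehicle, and with ownership and capability constraints it picks the right one:

```python
# Sketch of the "get the car" problem with made-up street data.
from dataclasses import dataclass

@dataclass
class Vehicle:
    kind: str            # "car", "van", "lorry", "truck"
    owner: str
    has_open_bed: bool   # can it carry loose sand?
    distance_m: int

VEHICLES = [
    Vehicle("car",   "stranger", False, 10),
    Vehicle("van",   "master",   False, 30),
    Vehicle("truck", "master",   True,  50),
]

def naive_pick(vehicles):
    """Unconstrained robot: just take the nearest vehicle."""
    return min(vehicles, key=lambda v: v.distance_m)

def constrained_pick(vehicles, owner, needs_open_bed):
    """Robot with task knowledge: filter by owner and capability first."""
    suitable = [v for v in vehicles
                if v.owner == owner and (v.has_open_bed or not needs_open_bed)]
    return min(suitable, key=lambda v: v.distance_m) if suitable else None

print(naive_pick(VEHICLES).kind)                        # car (a stranger's!)
print(constrained_pick(VEHICLES, "master", True).kind)  # truck
```

The filtering step is exactly the extra program the text calls for: the knowledge that sand needs an open bed and that only the master's vehicles are allowed.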
That means we must also develop programs so that we can create AI that doesn't produce surprises. Large language models are quite a new tool, and progress is fast. But we must realize that the road to AGI may be longer than we expect, or it may become reality sooner than we expected. We must also understand that no single person can ask all possible questions in the world. A human's knowledge is limited to the data that human has stored, and there is no "general person" who knows everything in the world.