Chatbot software faces fundamental limitations. One of them is the limit of information and knowledge: we don't actually know many of the things we think we know. We merely guess them, and that is one of the most fundamental limitations that a large language model (LLM) faces. A guess is another way of saying that we have a theory about something.
Another problem lies in building deep-learning algorithms and deep-learning neural networks. The problem is this: how do we define "deep learning"? When we use a piece of technical equipment, we don't need to know what happens inside it. All we need to know is what happens when we push the control buttons.
We can visit shops without knowing how barcodes and radio-frequency identification (RFID) tags work. Customers only need to scan the barcodes and pay the bill.
We can drive a car without any knowledge beyond how to start the engine, shift the gear into "D", and press the pedal. Or we can tell an autopilot where to drive. We must also know how to brake.
And that's it. Other skills, like washing the windows, matter too. But we need no deep knowledge of what really happens when we press the pedal. We must know that the car accelerates and slows down; we don't need to know the physics behind the engine, or how the computer gets the data onto our screens.
When somebody uses computers and LLMs, they don't need to know how that kind of system learns. When an LLM or AI accelerates an electric car, it simply increases the power input to the electric motors. When a sensor reads the speed limit, the AI stops accelerating as soon as it sees that the vehicle's speed is close to the target speed. That is one version of a sense-and-response circuit: the AI handles everything by reading values and reacting when certain values match.
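The sense-and-response loop described above can be sketched as follows. This is only an illustration, not a real vehicle controller: the power step, margin, and the toy speed response are invented values.

```python
def control_step(current_speed, target_speed, power, margin=2.0, step=5.0):
    """One sense-and-response step: raise power until speed nears the target.

    The controller only compares values: if the measured speed is still
    well below the target, it increases power; otherwise it holds power.
    """
    if current_speed < target_speed - margin:
        return power + step   # keep accelerating
    return power              # close to the target: stop increasing power

# Toy simulation: the speed responds roughly to the power input.
speed, power = 0.0, 0.0
for _ in range(30):
    power = control_step(speed, target_speed=50.0, power=power)
    speed = 0.8 * speed + 0.2 * power  # invented vehicle response
```

The loop never "understands" speed limits; it just compares numbers each cycle, which is exactly the sense-and-response behaviour described above.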
A camera, or some other sensor combined with GPS, can detect traffic signs. When it sees a sign such as "slow", that detection activates a program that reads the text, and the algorithm then searches for matches in a database. The database contains an entry marked "slow", and from that entry the system knows what it must do.
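The matching step can be sketched as a simple table lookup. The table entries and action names below are invented for illustration, and the OCR stage that produces the recognized text is assumed to exist elsewhere.

```python
# Hypothetical table mapping recognized sign text to a driving action.
SIGN_ACTIONS = {
    "slow": "reduce_speed",
    "stop": "full_stop",
    "yield": "give_way",
}

def handle_sign(recognized_text):
    """Look up the action for a traffic-sign text produced by an OCR stage."""
    key = recognized_text.strip().lower()
    return SIGN_ACTIONS.get(key, "no_match")
```

When the lookup succeeds, the system "knows what to do" only in the sense that a matching database entry exists.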
Speed limits can also be stored in traffic maps, and GPS can locate the car. The robot car is one of the systems that can learn. It can calculate how long a certain route takes, among other things, and select the routes that are fastest and most economical. But the robot car must also know everything it needs in order to actually travel those routes.
Deep learning would mean that the system really knows what things like robot cars are. The robot car can answer the question "What are you?" by saying that it is a robot, and then add that it is an autonomous ground vehicle. But the deeper question is this: does the LLM understand what those things mean? The LLM can search web pages that contain text about autonomous vehicles and connect them. But a ten-year-old child can do that too.