OpenAI's new ChatGPT tool beats the company's lead researcher.
Artificial intelligence is showing its power in a new form. OpenAI's new large language model has beaten the company's head researcher, making the system more powerful and complex than ever before. When we talk about LLMs and artificial intelligence, we face the fact that AI is accelerating its own development.
The AI might not produce genuinely new things, such as mathematical formulas. However, it can collect existing data and sort it into new orders, which means it can apply the data that researchers have collected into databases.
The AI is a tool that can observe things in new, interactive ways. Because it can search data all night long without ever getting tired, it has an ability that humans lack: it can scan sources such as radio waves and the net without needing rest or breaks, which makes it more powerful than other systems. The AI is also a tool that can handle massive data flows from many different sources.
However, some researchers are asking: do we already have artificial general intelligence, AGI? That is a more complicated question than we might dare to admit. AGI would be an AI, or substance, that can handle everything. And then we face another problem.
The AI can make things only when we give it orders, or when those things are pre-programmed into it. But then we can face something like a fatal error. What if the AI treats routine tasks, such as intermediate recording or backing up data, as necessary things that it must carry out?
The idea is that the AI can have an internal error-detection and repair algorithm. So what happens if that algorithm notices that the AI is not backing up data, because the backup step is pre-programmed into other computer programs? If its mission is to detect and fix simple errors, the algorithm could let the LLM act without asking the programmer's permission.
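The scenario above, a system that notices a missed routine step and "repairs" it on its own, can be sketched as a toy watchdog loop. Everything here (the log format, the function names) is invented for illustration, not a real system:

```python
# Hypothetical sketch: a watchdog that detects a missing routine task
# (e.g. a nightly backup) and silently performs it itself, without
# asking a human. All names and the log format are assumptions.

def backup_ran(log):
    """Check the simulated log for a completed backup entry."""
    return "backup: ok" in log

def auto_fix(log):
    """If the backup is missing, the watchdog reruns it on its own."""
    if not backup_ran(log):
        log.append("backup: ok")   # the system acts without permission here
        return "fixed without permission"
    return "no action"

log = ["boot: ok", "index: ok"]    # note: no backup entry
print(auto_fix(log))               # the watchdog steps in by itself
```

The point of the sketch is that nothing in the loop distinguishes a harmless repair from an unwanted one; the watchdog simply acts.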
There is no "general person" (GP) or "general genius" (GG). All people have limits in their knowledge and abilities, and that means we might not have the ability, or the will, to test the limits of AI.
AI also has its limits. The main limit is access to physical robots: the AI can act in the physical world only if it interacts with physical machines. And that makes the AI complicated.
An AI or large language model can drive a car if it has access to the car's central computer and the car has an advanced autopilot. Alternatively, the LLM can search for that program on the net and transfer the code to the self-driving car, or the AI can use humanoid robots as drivers.
The AI learns incredibly fast. The system simply connects to itself the database that contains the operational orders. The car's autopilot programs are modules that the LLM, or AI, can connect to itself, and every single module is a new thing that the LLM or AI learns.
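The "module" picture above can be sketched as a core model that plugs in capability modules and immediately gains each one as a new skill. The class names and the `attach` mechanism are toy assumptions for illustration, not a real API:

```python
# Toy sketch of an LLM core that "learns" by attaching capability modules.
# Module names and the attach() mechanism are invented assumptions.

class Module:
    def __init__(self, name, run):
        self.name = name
        self.run = run

class Core:
    def __init__(self):
        self.skills = {}

    def attach(self, module):
        # Connecting a module is all the "learning" the core needs:
        # the skill becomes callable the moment it is attached.
        self.skills[module.name] = module.run

    def do(self, skill, *args):
        return self.skills[skill](*args)

core = Core()
core.attach(Module("autopilot", lambda dest: f"driving to {dest}"))
print(core.do("autopilot", "Helsinki"))   # driving to Helsinki
```

In this picture, "learning a module" is just registration: no training happens, which is exactly why it is so fast.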
But can the AI create another AI? The AI is, in a sense, a substance. Spinoza says that the existence of a substance depends on the substance itself; yet a substance cannot create itself, and for that it requires other substances.
When a substance called AI creates a new substance, a new AI, it simply connects existing small substances together like a puzzle. Every piece of the puzzle is an independent language model, and the substance that creates the new AI connects those pieces into a larger whole. That is one way to think about the AI.
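One way to picture "connecting small substances like a puzzle" is a pipeline in which independent small models each handle one step, and the composed whole behaves as one larger system. A minimal sketch, with simple stand-in functions in place of real models:

```python
# Minimal sketch: independent "small models" composed into one pipeline.
# Each function stands in for a separate model; the composition is the
# "new AI" assembled from existing pieces.

def tokenizer(text):
    return text.lower().split()

def filter_stopwords(tokens):
    stop = {"the", "a", "an"}
    return [t for t in tokens if t not in stop]

def summarize(tokens):
    return " ".join(tokens[:3]) + "..."

def compose(*pieces):
    """Chain the pieces into one larger whole."""
    def whole(x):
        for piece in pieces:
            x = piece(x)
        return x
    return whole

new_ai = compose(tokenizer, filter_stopwords, summarize)
print(new_ai("The AI connects the small pieces into a larger whole"))
```

The composed `new_ai` does something none of its pieces does alone, yet it contains nothing that was not already in them, which matches the puzzle metaphor above.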
The AI can also copy itself onto other computers, which means the substance can create sub-substances. It can search for ways to protect itself against things like machine failure, which means it can slip onto computers where it should not be. And those things can cause surprises.
https://www.nytimes.com/2024/12/20/technology/openai-new-ai-math-science.html