"THE LOW-HANGING FRUIT IS GONE. THE HILL IS STEEPER."
Google CEO Sundar Pichai is saying sayonara to the era of easy AI advancements.
New AI advances require better hardware and better programming, along with better education and research. We should also make sure that the users of LLMs know where they should not use AI.
If people believe that ChatGPT is a tool that can act as a therapist or do everything they want, they are on the wrong route. We must also invite people outside the field to discuss with engineers and decide: where do we want to go with AI? The engineer who builds an AI knows how to use it, but does the ordinary person even know where these tools can be used?
Normally we see either positive or negative news about AI, but AI has both sides. We can find many positive and many negative things in it. AI is like every other tool: it can be used for good and for ill.
The human user decides where to use AI. AI is here to stay, and it makes life easier. But AI is fundamentally a tool, and all tools require training and practice before they work as they should.
Maybe the next campaign for citizens should be an AI driver's license, similar to the computer driver's license, with the focus on the reasonable use of AI. The AI that draws bears using computers is not the same AI that searches for tumors in X-ray images, and the AI that controls aircraft and drone swarms is far from either of those.
The CEO of Google said that easy solutions in AI are over, and that we need new tools to create new types of AI. But before we create any new type of AI, we must ask ourselves: what do we want? Do we want an AI that does everything for us? AI is not ready to think like humans.
Another thing is that AI is not qualified to work as a therapist, which means people should be informed of AI's limits. AI is also not a tool that can do everything. Using AI as a therapist would require a lot of work: the system would need certified parameters to act in that role. And do we even want that? Do we want to transform everything into algorithms?
Don't overestimate AI, but don't underestimate it either. The more you know about it, the safer you are.
Before we create new AI-based tools like algorithms or LLMs, we must ask what we need, what we want, and what we can do. Even if we can make an AI that works as a salesperson or a secretary for some boss, there is still a long way to go before AI can operate as a therapist.
AI is a tool that should help humans, but it cannot make independent decisions. AI has multiple limits, and there are dangers in the system. Sometimes an AI or intelligent chatbot has contributed to situations where people with problems harm themselves after using chatbots for therapeutic purposes. While protecting people's privacy, we forget to ask what an LLM should do if somebody seems self-destructive.
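As an illustration only, here is a minimal sketch of what such a guard could look like. Everything in it is an assumption: the phrase list, the `respond` and `generate_reply` functions, and the fixed helpline message are invented for this example, and a real system would need a clinically validated risk classifier rather than a word list.

```python
# Hypothetical sketch of a pre-response safety gate for a chatbot.
# The phrase list is illustrative only; real products would need a
# properly validated classifier, not keyword matching.

CRISIS_HINTS = ("hurt myself", "end my life", "no reason to live")

HELPLINE_MESSAGE = (
    "It sounds like you are going through something serious. "
    "Please talk to a human you trust or contact a local crisis helpline."
)

def generate_reply(message: str) -> str:
    """Stand-in for the normal LLM response path (assumed, not a real API)."""
    return f"(model reply to: {message!r})"

def respond(message: str) -> str:
    """Check for self-harm signals before letting the model answer."""
    lowered = message.lower()
    if any(hint in lowered for hint in CRISIS_HINTS):
        return HELPLINE_MESSAGE   # fixed, safe answer; never improvise here
    return generate_reply(message)

print(respond("I feel there is no reason to live"))
```

The point of the sketch is only the routing decision: risky messages get a fixed, human-directing answer instead of a generated one.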
The problem is that chatbots like ChatGPT and its competitors are not made for therapeutic purposes, yet some people use these creative chatbots exactly that way. These intelligent chatbots are not tested for such use, which means they can give answers that should not be tolerated. Such cases show that we, or somebody else, expect too much from LLMs and other AI solutions.
AI can make many impressive things, such as paintings, by following the text commands a user gives. The newest tools can even build computer-game-style virtual worlds on command, which makes it possible to create virtual models of cities and other places using spoken or written instructions.
AI can also be a new tool for police work: a crime victim can use AI to produce an image of the perpetrator, which makes the police artist's work easier. The same systems can be used to collect data on an opponent's military systems or other classified platforms. But these AI systems don't think; they just follow algorithms.
AI can change our way of thinking precisely because it doesn't think. If an AI follows page ranking, it selects the most popular web pages as its sources and simply combines text from them. Selecting sources by popularity can cause massive problems: toxic pages are often popular if we count only clicks, and clicks are the easiest way to measure how popular a text is. This can strongly polarize the information the AI gives, as the sketch below illustrates.
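A minimal sketch of the bias described above, with invented pages and click counts: if the selection rule is "take the most-clicked pages", then whatever dominates the clicks dominates the answer.

```python
# Illustrative sketch: choosing source pages purely by click count.
# The pages, URLs, and numbers are invented for the example.

pages = [
    {"url": "https://example.org/calm-analysis", "clicks": 1_200, "tone": "neutral"},
    {"url": "https://example.org/outrage-take",  "clicks": 9_800, "tone": "toxic"},
    {"url": "https://example.org/viral-rant",    "clicks": 7_500, "tone": "toxic"},
    {"url": "https://example.org/expert-review", "clicks": 900,   "tone": "neutral"},
]

def top_sources(pages: list[dict], k: int = 2) -> list[dict]:
    """Pick the k most-clicked pages: popularity is the only criterion."""
    return sorted(pages, key=lambda p: p["clicks"], reverse=True)[:k]

selected = top_sources(pages)
print([p["url"] for p in selected])
print("tones feeding the answer:", {p["tone"] for p in selected})
# Both selected pages are "toxic": click-based ranking skews the mix,
# even though calmer, better sources exist in the pool.
```

Nothing in the rule checks quality or accuracy; it rewards attention, which is exactly the polarization risk described above.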
And finally: do we want an AI that thinks for us? Do we want to write a doctoral thesis using AI? That is the big question. We can make many new things with AI, but are we going too far? When we go to school, presumably we want to learn something.
But if an AI does everything for us, we learn nothing. If we let AI write doctoral theses and even think for us, we end up raising AI above ourselves. When we cheat in school, the only people we cheat are ourselves. AI is a thing that requires training. The hardest thing is to accept that AI cannot do everything, and that it is dangerous to misuse it.
https://futurism.com/the-byte/google-ceo-easy-ai-over