AI is advancing very fast, and sometimes we must ask: who controls whom? Do the engineers control the development of AI, or does the AI control the engineers who develop it? The AI race is the visible part of this; development moves very fast. And behind it sits one assumption.
The requirement is that the AI must always get new features that make it better and better. Or is it really better? It can do more things, and that is one description of a better AI. But even if the AI is broader and can do more things, we must ask whether it does those things any better. And the AI is still far away from the thing we call artificial general intelligence, AGI.
The problem with AI advancement is that the focus is on computing capacity. Increasing that capacity makes it possible to run more and more complex algorithms. The hardware, the physical systems, is important. However, the main problem is that AI requires more than just computing capacity. It requires the ability to collect, analyze, and reorder information into a form that people can trust.
The problem is that even the deepest learning systems can do many things, but they do not know what they really do. The actions that an AI performs are quite easy to explain. When we order the AI to raise a limb, the system can render an animation in which a character raises its limb, or it can send a command to a robot and the robot raises its limb.
During that process, the AI must recognize the order. Then the system must check whether those words match any action that targets the physical system. If there is a match, the AI activates the physical action. If the system must calculate something, it can ask whether it should show all stages of the calculation or whether the final answer is enough for the user. In those cases there is a router that decides whether the order requires the physical system or whether the screen is enough to present the solution.
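As a minimal sketch of that kind of router, it could look roughly like the snippet below. The command names, the "actuator" strings, and the exact-match rule are all invented for illustration; they are not any real robot API.

```python
# Minimal sketch of the "router" idea: match the order against known
# physical actions, otherwise answer on the screen.
# Command names and actuator labels are hypothetical placeholders.

PHYSICAL_ACTIONS = {
    "raise right hand": "motor_right_arm",
    "raise left leg": "motor_left_leg",
}

def route(order: str) -> str:
    """Decide whether an order goes to the physical system or the screen."""
    action = PHYSICAL_ACTIONS.get(order.lower().strip())
    if action is not None:
        # A match was found: activate the physical action.
        return f"activate actuator: {action}"
    # No physical match: present the answer on the screen instead.
    return f"show on screen: {order}"

print(route("Raise right hand"))   # activate actuator: motor_right_arm
print(route("What is 2 + 2?"))     # show on screen: What is 2 + 2?
```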
There is nothing mysterious about this. The AI works the way its creators, the programmers, ordered it to work. But the fact is that the AI does not know what it really does. When we tell a robot that it must raise its right hand, the robot, or its computer, selects the components that operate the right arm, and then the robot raises that limb.
The fact is that if we miswire those microchips, the robot may raise its left leg instead. That is the key to AI: it can do many impressive things, but it just follows algorithms that determine how it must react to something. The activator can be something the robot sees or hears. The activator launches a trigger, and the trigger activates the action. The action can be physical or virtual.
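To make the miswiring point concrete, here is a small sketch under invented assumptions (the limb names and channel numbers are not from any real controller): the robot's behavior is only as correct as the table that connects commands to actuators.

```python
# The command itself carries no meaning to the machine; only the wiring
# table does. Limb and channel names are invented for illustration.

CORRECT_WIRING = {"right_arm": "channel_1", "left_leg": "channel_2"}
MISWIRED       = {"right_arm": "channel_2", "left_leg": "channel_1"}

def raise_limb(command: str, wiring: dict) -> str:
    """The robot simply drives whatever channel the table points to."""
    return f"driving {wiring[command]}"

print(raise_limb("right_arm", CORRECT_WIRING))  # driving channel_1 (right arm)
print(raise_limb("right_arm", MISWIRED))        # driving channel_2 (left leg!)
```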
But then we must realize one thing. We can make robots that say "ouch" when we step on their feet. They can complain that we use bad language, or they can even cry if they have the right systems. In those cases, the impolite action activates a trigger that makes the robot look sad. But the fact is that robots don't have feelings at all. They might react and look like humans, but they are still robots. They react to our voices and our words, but they are not intelligent.
We could produce the same behavior with tape recorders and things like volume meters. If we step on the robot's foot, that presses a button that plays a tape with the word "ouch". If we yell too loudly, the volume meter triggers a tape that says "Don't yell". In that sense those systems are no more intelligent than tape recorders. They seem highly intelligent, but they are not. They simply do whatever the button or trigger, and the action connected to that trigger, order them to do.
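The whole mechanism can be written as a fixed lookup from triggers to canned responses. In this sketch the sensor event names and the responses are invented purely to illustrate the "tape recorder" level of intelligence:

```python
# "Tape recorder" intelligence: a fixed trigger -> response table.
# Sensor event names and responses are invented for illustration.

CANNED_RESPONSES = {
    "foot_pressure": "ouch",
    "loud_voice": "Don't yell",
    "bad_language": "Please watch your language",
}

def react(sensor_event: str) -> str:
    """Return the pre-recorded response wired to this trigger, if any."""
    return CANNED_RESPONSES.get(sensor_event, "")

# A pressure switch on the foot and a volume meter feed events in:
print(react("foot_pressure"))  # ouch
print(react("loud_voice"))     # Don't yell
```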
When an AI makes a diagnosis from X-ray images, the system just compares differences between those images. It works in a similar way to fingerprint comparison systems. The system can use multiple X-ray images and search for things that have changed between them. And just like a fingerprint recognition system, it is helpless if it has no reference images to compare against.
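At its simplest, that kind of comparison is pixel-wise differencing between two aligned images. The tiny arrays and the threshold below are placeholders, not clinical values, and real diagnostic systems do far more (image registration, learned features); this only illustrates the comparison idea:

```python
import numpy as np

# Sketch: compare two aligned grayscale images and flag changed regions.
# The toy arrays and the threshold are invented for illustration.

earlier = np.array([[10, 10, 10],
                    [10, 10, 10],
                    [10, 10, 10]], dtype=float)

later   = np.array([[10, 10, 10],
                    [10, 80, 10],
                    [10, 10, 10]], dtype=float)

difference = np.abs(later - earlier)          # per-pixel change
changed = difference > 30                     # arbitrary change threshold
print("changed pixels:", int(changed.sum()))  # changed pixels: 1
```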
In the same way, when AI tries to detect stolen social media accounts, it searches for differences in how the user writes and what the user publishes. The idea is that many people may be using the same trolling account, and each of those people has different things that they like and hate. Differences in the texts and publications can reveal that there is more than one user behind the account.
On social media, we might not even know the people who publish things there. We might not know their faces or their identities. But those avatars have identities too. The people behind those images, and possibly fake identities, have certain music genres that they like and follow. We might follow an avatar or a fake identity because that person publishes things we like.
And if those publications suddenly change, the person behind that identity may have changed. Things like musical tastes do not change overnight, and people do not grow up in half a day. Too-sudden changes tell us that something is happening on the other side of the publications.
Small things turn into big things if a person we have followed for years suddenly changes some core element of their publications. The fact is that we don't know who people are on social media, but we know what they like to publish and what they dislike. We might not know their names or their faces, but we know what kind of arguments they use, what musical opinions they hold, and what their taste in entertainment is.
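A toy version of that kind of check is to compare the word-use profile of older posts against newer ones. The posts and the similarity threshold below are invented for illustration, and real systems would use far richer behavioral features than word frequencies:

```python
from collections import Counter
import math

def profile(posts):
    """Word-frequency profile of a set of posts."""
    words = " ".join(posts).lower().split()
    return Counter(words)

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two word-frequency profiles."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

old_posts = ["love this jazz record", "great jazz gig tonight"]
new_posts = ["buy cheap watches now", "amazing crypto offer today"]

similarity = cosine(profile(old_posts), profile(new_posts))
print(f"style similarity: {similarity:.2f}")
if similarity < 0.3:  # arbitrary illustrative threshold
    print("warning: posting style has changed sharply")
```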