The AI's problem is that it doesn't think. That is the point we keep repeating. Its ability to collect data without analyzing it causes bias. What makes this possible is that the AI doesn't understand what a web page actually contains.
It sees words and their connections to other things, but the system doesn't realize what those words really mean. The AI, or large language model (LLM), is programmed to give answers within a certain time. That means the LLM doesn't trace the connections of every single word in a document. This saves time, but it decreases the trustworthiness of the output.
*********************************************
The study put ChatGPT through 18 different bias tests. The results?
AI falls into human decision traps – ChatGPT showed biases like overconfidence, ambiguity aversion, and the conjunction fallacy (also known as the “Linda problem”) in nearly half the tests.
AI is great at math, but struggles with judgment calls – It excels at logical and probability-based problems but stumbles when decisions require subjective reasoning.
Bias isn’t going away – Although the newer GPT-4 model is more analytically accurate than its predecessor, it sometimes displayed stronger biases in judgment-based tasks.
The study found that ChatGPT tends to:
Play it safe – AI avoids risk, even when riskier choices might yield better results.
Overestimate itself – ChatGPT assumes it’s more accurate than it really is.
Seek confirmation – AI favors information that supports existing assumptions, rather than challenging them.
Avoid ambiguity – AI prefers alternatives with more certain information and less ambiguity.
(ScitechDaily, More Like Us Than We Realize: ChatGPT Gets Caught Thinking Like a Human)
*********************************************
The fact is that at least part of the results the AI pulls from the net are based on search engines' page rankings. Looking at it this way: when we teach an AI, we should give feedback on its results.
If we tell the AI that it gives bad answers, that helps it adjust the sources it uses. The problem is that giving trustworthy feedback to the AI sometimes requires us to be experts in the thing we asked about.
The big problem with AI and other copilots is that they are made for customers. The product must also please the user in some way, and that can cause bias. When somebody writes about AI, that person should compare it with other LLMs. The most advanced AI "thinks" that it's the best of all; its algorithms tell it how impressive it is, and "more advanced" comes to mean "better than the other AIs."
When we think about the AIs and their operators and owners, we must realize that the companies developing them are independent. Those AIs don't have access to their competitors' statistics; they are not under one dome. So how should they compare themselves with other AIs when asked, "What is the best AI in the world?"
Those systems cannot get trusted data, such as what percentage of their answers satisfy users. That kind of data is not collected; the AI does not ask whether the user liked its answers or found them good. Another thing is that a regular user is not the same as a doctoral-level user. The AI requires specific prompts that use the right terminology, and without the right terminology, the AI is helpless.
And AIs cannot compare those percentages with other AIs' statistics. This can make an AI "think" that it's better than it is. It may rely on Wikipedia when it should use specific sources like the CERN homepages. The AI requires that the person using it knows the terminology, and misusing a term in a query can turn the AI's text into garbage. The AI may not distinguish between "hall" and "Hall effect." That can cause a situation in which the AI starts to talk about echoes or aircraft hangars when it should be talking about resistance in electric wires.
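The "hall" vs. "Hall effect" problem can be sketched with a toy keyword search. This is a hypothetical, minimal illustration (the corpus and scoring are invented for this example, not any real search engine's method): word matching alone has no notion of meaning, so the ambiguous word "hall" matches documents about concert halls and hangars just as well as a physics text.

```python
# Toy corpus: three hypothetical documents from different domains.
DOCS = {
    "physics": "The Hall effect produces a voltage across a conductor "
               "carrying current in a magnetic field.",
    "acoustics": "A large concert hall produces long echoes.",
    "aviation": "The aircraft hangar is a huge hall for storing planes.",
}

def scores(query: str) -> dict:
    """Score each document by how many query words it contains."""
    words = set(query.lower().split())
    return {name: len(words & set(text.lower().split()))
            for name, text in DOCS.items()}

# The bare word "hall" matches every document equally: the query alone
# cannot tell physics apart from echoes and hangars.
print(scores("hall"))         # {'physics': 1, 'acoustics': 1, 'aviation': 1}

# Adding the precise term "effect" singles out the physics text.
print(scores("Hall effect"))  # {'physics': 2, 'acoustics': 1, 'aviation': 1}
```

The point of the sketch is that the precision comes from the user's terminology, not from the system: only the extra word "effect" separates the physics document from the rest.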
https://scitechdaily.com/more-like-us-than-we-realize-chatgpt-gets-caught-thinking-like-a-human/