Is AI right all the time?

Henrywrites

Artificial intelligence has made research much simpler. Do you think AI can be trusted to provide the right information, or does it make mistakes sometimes?
 
I think AI can make mistakes if it pulls its answer from a less reliable source like social networks, which can have a lot of fake answers. It can also give good answers to text questions if it draws on a more reliable source, like an encyclopedia or a trustworthy news website.

I also think AI sometimes has trouble creating realistic computer graphics. For example, AI image generators are notoriously bad at drawing human fingers, according to tech blog posts I've read on the subject.
 
As someone who uses LLMs daily: they are wrong all the time. There's even a term for it: hallucinating. That doesn't mean they're not useful; it just means the output needs human oversight before being used.
 
they are wrong all the time.
They are not wrong all the time; saying so is unhelpful.

AI is a great tool, but it's just a tool, and it requires a skilled wielder to use it effectively. This is why there's a disclaimer at the bottom of ChatGPT saying, "ChatGPT can make mistakes. Check important info."

Always verify the output, especially if it's complex or an 'opinion'.
 
AI can be right when you ask it for something verifiable. Coding help is a very good example: it needs the right inputs to produce the right outputs, but it can be right, and you can check. It's mostly wrong when the input itself contains errors.
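Coding is a good case precisely because the output is checkable: instead of trusting the model, you can run its code against facts you already know. A minimal sketch of that habit (the leap-year function here is just a hypothetical stand-in for some AI-suggested code, not from any particular model):

```python
# Hypothetical AI-suggested function: is a given year a leap year?
def is_leap_year(year: int) -> bool:
    # Gregorian rule: divisible by 4, except century years,
    # unless the century year is divisible by 400
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Don't take the AI's word for it -- verify against known cases
checks = {2000: True, 1900: False, 2024: True, 2023: False}
for year, expected in checks.items():
    assert is_leap_year(year) == expected, f"AI code is wrong for {year}"
print("all checks passed")
```

If any assertion fires, you've caught a hallucination before it shipped; that's the "skilled wielder" part in practice.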
 
Absolutely not. I've caught it making mistakes plenty of times. Say it gives me code where only half of it works; I point out what needs to be edited, and it fixes that part, but then it makes the same mistake again in the very code I asked it to tweak.
 
We can't expect AI to diagnose issues the way a human mind would, since that's not what it does. It doesn't have the ability to diagnose complex issues; it just pulls answers from search engines and interprets the information to fit the question.
 
Yes, but the thing is, it's repeating what other people said on different websites and accepting things as true even when they aren't. I've seen this plenty of times when asking ChatGPT. It has no critical thinking because it doesn't think; it just does a search and then repeats whatever info it found on the particular subject.
 