Imagine you are a customer of a tech company and you suddenly get an email saying you can now run the service on only one computer, even though no such rule existed before. Angry, you cancel your subscription, and only later does it emerge that the whole thing was the fault of an AI bot!
Yes, as Artificial Intelligence (AI) gets faster, its mistakes are growing just as quickly. In technical language this problem is called hallucination: when an AI invents something on its own without any concrete basis.
AI made up a fake rule, customers got angry
Recently, just such an incident happened at Cursor, a programming tool company. Its AI support bot told some customers that Cursor could now be used on only one computer. People were so angered by this information that some canceled their accounts.
Later the company’s CEO Michael Truell clarified on Reddit. He said, ‘We have no such policy. This was an incorrect response from a frontline AI bot.’
AI is improving, but still far from the truth
Today ChatGPT, Google Gemini and other AI tools are helping with many tasks, such as writing code, drafting emails, preparing reports and answering questions. But one big problem remains: these systems sometimes get facts wrong or produce information without any source.
Tests of some new AI models found that they can produce incorrect information up to 79% of the time. In a report in The Indian Express, AI expert Amr Awadallah says, ‘No matter how hard we try, AI will always make some mistakes. They will never stop completely.’
Where is wrong information dangerous?
If AI makes a mistake in, say, a film recommendation, it may not matter much. But if the same mistake occurs in a court case, a medical report or business data, the damage can be serious.
AI built into search engines like Google or Bing also gives many answers that are either completely wrong or have no real source behind them. For example, if you ask which marathon on the West Coast is good, the answer may suggest a Philadelphia race, even though Philadelphia is on the East Coast.
Verify before you trust
AI is making our lives easier, but blind trust will not work. Especially when it comes to medical, legal or other sensitive data, human review is still most important. Until AI can tell what is true and what is false, caution is the best solution.