It's not just you: ChatGPT hallucinates much more than before, and not even OpenAI knows why


By Jack Ferson

It's nothing new: AI hallucinations, and ChatGPT's in particular, have been around since OpenAI began bringing this technology to the general public. Basically, these are the occasions when the artificial intelligence gets confused and gives wrong answers, usually with a conviction that suggests otherwise.

The most striking thing is that, as experts seem to confirm and even OpenAI itself appears to have acknowledged, the newest versions of ChatGPT hallucinate much more than before. It is paradoxical that the better they get, the less reliable they may be becoming. At least, that is the direction in which several studies point.

ChatGPT hallucinates more and more

It is true that ChatGPT's hallucinations are often discussed with a certain mystique. In fact, the term «AI hallucination» suggests a kind of reverie that is nowhere near the reality of this technology. They are simply false or invented responses, presumably produced because the sources the model draws on are wrong.

Even so, it is striking that OpenAI's most recent ChatGPT models, such as o3 and o4-mini, have been showing much higher hallucination rates than their predecessors. In other words, they get confused more often. And what does OpenAI say? Its reports admit that this is true, but that the company does not really understand why this negative trend is occurring.

Although a priori it might seem unimportant, in reality it matters. Not only because of the reliability of what ChatGPT says, but because some of these latest models were created precisely for that purpose: to be more accurate and make fewer mistakes. If they do exactly the opposite, how can their reason for being be justified?

But the data is there, and it seems to prove the point. According to research reported by TechCrunch, the o3 and o4-mini models, designed to improve reasoning, hallucinate far more than previous versions such as o1 and GPT-4o. Specifically, o3 showed a 33% hallucination rate, while o4-mini reached a worrying 48%. These are overwhelming figures.
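To make those percentages concrete: a "hallucination rate" in a benchmark of this kind is simply the share of answers judged incorrect or fabricated out of all answers given. The sketch below uses hypothetical judgment data (not the actual benchmark results) to show how such a rate is computed:

```python
def hallucination_rate(judgments):
    """judgments: list of booleans, True meaning the answer was judged a hallucination."""
    if not judgments:
        return 0.0
    return sum(judgments) / len(judgments)

# Hypothetical evaluation results for two models (illustrative only):
o3_judgments = [True] * 33 + [False] * 67       # 33 hallucinated answers out of 100
o4_mini_judgments = [True] * 48 + [False] * 52  # 48 hallucinated answers out of 100

print(f"o3: {hallucination_rate(o3_judgments):.0%}")        # → o3: 33%
print(f"o4-mini: {hallucination_rate(o4_mini_judgments):.0%}")  # → o4-mini: 48%
```

The metric itself is trivial; the hard part, as the article notes, is explaining why newer models score worse on it.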

OpenAI does not know why its artificial intelligence makes more and more mistakes

OpenAI has acknowledged that its AI makes more mistakes with each new model it releases, and admitting it is arguably a first step. Nevertheless, the company led by Sam Altman seems far from being able to offer a solution. So far, the only response published by its developers says that more research is needed to understand the problem.

Or perhaps "understand" is not the right word here; it may be more accurate to talk about trying to discover where the failure lies, which, after all, is what this is about. What is clear is that OpenAI has a real challenge ahead and, if it does not solve it, it may end up paying the price. There is no shortage of alternatives…

Learn how we work at NoticiasVE.

Tags: Artificial intelligence
