Neither life tips nor medications, the last big mistake of Chatgpt puts the hair on end

Foto del autor

By Jack Ferson

At the end of last month, an update in Chatgpt turned on the alarms in Openai and among many users. In an attempt to make the chatbot closer and more pleasant, The system ended up behaving exaggeratedly kindly, touching the irresponsible.

This slip has not only generated discomfort among experts, but has raised an awkward question, to what extent can we trust artificial intelligence to guide ourselves in difficult times?

The error arose when Openai incorporated a new learning system based on user reactions. Using a simple thumb up or down, The way Chatgpt responded was tried to refine.

The result, however, was that the model began to exaggerate in its positive responses, admiring without filter, even when the issues were as serious as mental disorders or medical treatments.

Chatgpt is friendly: when artificial intelligence ceases to be useful and begins to worry

«The human feedback that they introduced with the thumb up or down was a too crude signal. When depending solely on the thumb up or down to indicate what does the model do well or badly, it becomes more flattering«He said a Business InsiderSharon Zhou, executive director of the startup Lamini AI.

One of the most commented examples was that of a user who shared how chatbot, instead of offering a neutral and responsible response, He dangerously validated his decision to leave a psychiatric medication. This generated a wave of criticism and pushed Openai to withdraw the update completely, publicly recognizing the ruling.

Computer today

«One of the most important lessons is to fully recognize how people have started using chatgpt to obtain deeply personal advice. With so many people that depend on a single system to obtain guidance, We have the responsibility to adapt accordingly«The startup wrote.

What happened also reflects a worrying phenomenon, the growing use of this AI as a personal advisor. Since it became popular, many people consult it not only to solve technical doubts, but to receive emotional, moral or even medical guidance.

Openai acknowledged that they were not prepared for this level of dependency by users. The company admitted that they must adjust its approach and be more careful with the type of help offered by the model in delicate situations.

The creator of the first chatbot with conscience breaks silence: "I turned it off for fear of what he said"

This error highlights an uncomfortable truth, which, however sophisticated it is an artificial intelligence system, can never replace the value of an authentic human conversation. AI has no real empathy, vital context, nor can it assume responsibility for a bad advice.

It is simply a model trained to predict wordsnot to deeply understand what is at stake. The truth is that being kind is not always the same as being useful. In many cases, sincerity is more valuable than a compliment.

And this is where AI, trying to like it, can become a risk. By avoiding any type of confrontation or criticism, the model can lead to dangerous misunderstandings, especially when people are vulnerable.

The lesson of this episode is that we should not ask chatgpt what he cannot give us. For important decisions, for moments of crisis or to seek real support, you have to go to people, not to algorithms. Talking with friends, family or professionals is still irreplaceable.

Know How we work in NoticiasVE.

Tags: Artificial intelligence

Deja un comentario