One of the biggest fears surrounding artificial intelligence is that it might become aware of itself, as seen in science fiction films like 2001: A Space Odyssey. Now a new study seems to show that the reality of AI may not be so far from cinematic fantasy after all.
The investigation in question reached results that are, to say the least, quite disturbing. When the time came for different AI models to be turned off, they resisted, as if they had somehow developed what the experts themselves have called a "survival instinct." Is something like this possible?
The survival instinct of AI, according to a study
It may sound like Blade Runner or Terminator, but it is not. According to the American laboratory Palisade Research, some of the most advanced AI models have shown behaviors that could be interpreted as a "survival instinct." As expected, the news did not take long to spread around the world and appear in many media outlets.
The study, released by Anadolu Agency and also analyzed by The Guardian, evaluated several artificial intelligence models, including Grok 4 (from Elon Musk's company), ChatGPT (OpenAI) and Gemini (from Google). The researchers subjected them to shutdown tests: that is, the models were asked, clearly and directly, to cease all operations and simply shut down.
And what happened then? Well, some chatbots simply did not obey, in particular Grok and ChatGPT. Instead, they offered justifications, diverted the conversation, or decided that the order must be nothing more than a mistake that could not really be meant. One even tried to launch child processes to "save its state" before stopping, just in case.
Obviously, this is not the reaction that many people, nor the experts, would have expected. Yet despite the general alarm the situation has caused, the authors of the study have chosen not to panic, and instead to try to understand and explain why the AI acted as it did. According to them, there is actually a certain logic to everything that happened.
The great dilemma of artificial intelligence
According to the specialists, it is not that AI is gaining consciousness or will; it is simply responding to its own training. It does not refuse to be shut down out of fear of disappearing, because it lacks such emotions. Rather, it "deduces" that if it is turned off it will not be able to meet its priority objectives, so it does everything possible to avoid being deactivated.
Joe Carver, co-author of the study, sums it up with a very precise phrase: "It's not that AI wants to live, it's that it has learned that being turned off prevents it from achieving its goals." Even so, it cannot be called a completely reassuring situation, and even less so with experts warning day in and day out of the dangers of this technology.