Google reveals how cybercriminals use AI: from ransomware to credential theft


By Jack Ferson

For most people, artificial intelligence is just a tool for work or entertainment in their daily lives. There is no shortage of people who already use it to stay informed, much like a Google search engine. However, the technology giant has once again amplified the voices of experts who are raising concerns about its implications for cybersecurity.

In an exhaustive study, Google has revealed the main ways in which cybercriminals are exploiting this technology. These range from the most mundane and simple schemes to operations with far more ambitious objectives. And this is not a future scenario, as some specialists have predicted; it is already a reality.

The dangers of AI, according to Google

According to a new report from the Google Threat Intelligence Group (GTIG), a growing number of malicious actors, including cybercriminal groups and state-backed operations, are using AI models to plan, execute, and conceal cyberattacks. To do so, the report acknowledges, criminals largely rely on popular chatbots such as ChatGPT or Gemini.

What is striking is that this is not a single, specific danger but a whole chain of them: from gathering reconnaissance and writing phishing emails to creating adaptive malware. The main problem? Traditional antivirus software is easily defeated most of the time. It is simply not prepared for AI. At least, not yet.

The consequence Google points to could not be clearer: cybercrime (the term used in the study itself), once accessible only to a handful of skilled hackers, is now within anyone's reach. "We are seeing a professionalization of cybercrime driven by AI," they warn. "The barrier to entry is falling, and the level of sophistication is rising."

And what are the companies behind the major AI tools, such as OpenAI, Microsoft, or Google itself, doing about it? Trying to prevent it, of course. But theory is one thing and results are quite another. The filters and safety systems built into chatbots not only appear insufficient against the threat but, according to experts, "can be bypassed in a matter of minutes."

Nation-states are not far behind either

Another striking aspect of Google's research is who is maliciously exploiting AI's capabilities. If you guessed common cybercriminals and scammers, you would be right, but it does not end there. The specialists also point to groups linked to the governments of countries such as North Korea, Iran, and China.

According to Google, some states are already using AI for espionage, network manipulation, and the theft of strategic information. It is a problem that appears to be growing and, for now, has no easy solution. The bad actors, it seems, are one step ahead.
