AI already writes more than half of the code and humans can't keep up: the bugs are piling up


By Jack Ferson

Artificial intelligence has ceased to be a secondary technology in software development. In many companies, it already generates more than 50% of the code that ends up in production. The worrying part is that, in too many cases, this code is shipped without anyone having reviewed it.

The speed at which AI tools produce new lines of code has outpaced human capacity for supervision. And that is already having consequences: errors, vulnerabilities, and software quality that is starting to suffer.

What until recently seemed like a prediction of the future is now routine in development environments: 42% of programmers who use chatbots say that at least half of their code is automatically generated by this technology.

Some even acknowledge that 100% of what they write comes from tools such as GitHub Copilot, ChatGPT, or other solutions based on language models. These tools have become so integrated into the workflow that they are no longer used as occasional help but as a fundamental part of the creation process.

Lines of code without human supervision: a problem that can get very expensive

Experts warn that automation has a cost that few organizations are taking seriously: as AI produces more code, humans are overwhelmed trying to review it.

According to the Cloudsmith report, a third of developers admit that they do not manually examine the content generated by artificial intelligence before deploying it, and this is not just a matter of inattention or negligence.


Image: AI tools / Cloudsmith

The reality is that the volume of production is such that there is neither enough time nor resources to audit everything; there is simply too much of it. The result is a new assembly line in which the priority is to deliver fast, even if that means ignoring the risk.

As this unreviewed code is incorporated into real-world software, errors begin to multiply. From minor failures to severe vulnerabilities, bugs make their way through without anyone detecting them in time. And it is not only a matter of involuntary errors.

AI can also introduce false dependencies, suggest malicious packages, or replicate defective patterns it has learned from public code fragments. If that code is shipped, the problem is passed directly to the end user, who runs a less secure app without knowing it.
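One common defense against this kind of dependency risk is a vetted allowlist. The sketch below is a minimal, hypothetical illustration (the allowlist contents and function name are assumptions, not from the report): any declared dependency that is not on the team's vetted list gets flagged before a build, which catches typo-squatted or hallucinated package names suggested by an AI tool.

```python
# Minimal sketch: flag declared dependencies that are not on a vetted
# allowlist, so typo-squats or AI-hallucinated package names are caught
# before they reach a build. The allowlist here is illustrative only.

ALLOWLIST = {"requests", "numpy", "flask"}  # packages vetted by the team

def audit_dependencies(declared):
    """Return the declared packages that are not on the allowlist."""
    return sorted(set(declared) - ALLOWLIST)

# "reqeusts" is a deliberate typo-squat-style name for the example.
suspicious = audit_dependencies(["requests", "reqeusts", "numpy"])
print(suspicious)  # only the unvetted name is flagged
```

In a real pipeline this check would run in CI against the project's lockfile, and the allowlist would be maintained alongside the organization's artifact management tooling.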

Are we at a turning point?

The company Cloudsmith has issued a serious warning: software development has reached a critical point. Generative AI and the chatbots companies rely on have ceased to be a complement and have become a structural component.

However, the mechanisms that guarantee confidence in the code, such as provenance controls, artifact management, or traceability, remain anchored in a model designed for manual environments. Meanwhile, AI advances at a speed those models can no longer match. The gap grows every day, and with it, the risks.

The report itself proposes a clear way out: automate security with the same intensity with which code generation has been automated. Companies must implement policies that automatically detect unreviewed code, identify its origin, evaluate the risks, and block insecure deployments.
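Such a policy could take the shape of an automated gate in the deployment pipeline. The sketch below is a hypothetical illustration of that idea, not Cloudsmith's actual tooling; the field names (`ai_generated`, `reviewed_by`) are assumptions about what change metadata might look like:

```python
# Hypothetical policy gate: block a deployment when any changed file was
# AI-generated but never signed off by a human reviewer. The metadata
# schema here ("ai_generated", "reviewed_by") is an assumption for the
# sake of illustration.

def deployment_allowed(changes):
    """Allow deployment only if every AI-generated change has a reviewer."""
    for change in changes:
        if change.get("ai_generated") and not change.get("reviewed_by"):
            return False
    return True

changes = [
    {"file": "app.py", "ai_generated": True, "reviewed_by": "alice"},
    {"file": "utils.py", "ai_generated": True, "reviewed_by": None},
]
print(deployment_allowed(changes))  # blocked: utils.py was never reviewed
```

The point is not this particular check but the principle the report describes: the gate runs automatically on every deployment, at the same speed as the code generation it is policing.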

Traceability must be an essential part of the process from the start. Human review remains very important, but it can no longer be the only barrier: if AI generates the software, automatic mechanisms are also needed to guarantee its quality.


Technologies such as ChatGPT, Gemini, or Copilot are transforming programming, making it faster, more accessible, and more efficient. But also more vulnerable, and the problem lies not in the technology but in the lack of adaptation.

If development continues to advance without proportional controls, the risk of errors, security failures, and unstable behavior will keep growing. It is not about going backwards, but about accompanying this technological leap with a new way of understanding security.

Learn how we work at NoticiasVE.

Tags: Artificial intelligence
