Max Tegmark, from the Massachusetts Institute of Technology (MIT), and Yoshua Bengio, from the University of Montreal, have warned about the danger of building systems that are too advanced: artificial general intelligence could become a serious problem if adequate control is not established over its development. It is a warning that comes from two of the world’s leading AI experts.
The accelerated growth of artificial intelligence has led to the creation of systems called «AI agents», designed to act with greater autonomy and make decisions without direct human intervention.
Leading companies in the technology sector promote this evolution with the promise of improving productivity and making everyday life easier. However, Bengio warns, on CNBC’s Beyond The Valley podcast, that this approach carries a significant risk: giving AI the ability to act without effective supervision.
Artificial general intelligence and its possible risk: experts warn
According to the researcher, the key to the problem lies in the agency of AI, that is, its ability to set and pursue its own goals. «We are creating a new form of intelligence on our planet without knowing whether its decisions will be compatible with our needs,» says Bengio.
This uncertainty is what leads experts to call for strict regulation before technology advances too much.
The biggest fear of specialists is not only the autonomy of these systems, but the potential development of self-preservation mechanisms within AI. Bengio asks: «Do we want to compete with entities that are smarter than us? Not a very reassuring bet, right? That’s why we have to understand how self-preservation can emerge as a goal in AI.»
This scenario could lead to a lack of control over advanced systems, which would make their evolution unpredictable.
The possibility of an AI seeking its own survival, or making decisions that do not coincide with human needs, raises ethical and technical dilemmas. Although no artificial intelligence currently has true consciousness or intentions, the trend toward increasingly autonomous systems is a growing concern among researchers.
AI tools instead of autonomous artificial intelligence
Max Tegmark proposes a different solution to the development of autonomous agents: focusing on the creation of «AI tools» with strict controls, instead of allowing systems to operate with full autonomy.
Examples of this approach include advanced tools for medicine, such as systems capable of suggesting cancer treatments, or autonomous vehicles designed with safety protocols that guarantee human control at all times. Tegmark believes that, with proper safety standards, AI can evolve without becoming a risk.
«I think, if I’m an optimist, we can have almost everything we’re excited about with AI… if we just insist on having some basic safety standards before selling powerful AI systems,» says Tegmark.
The problem is that there are currently no global standards that regulate the development of these systems, which leaves the door open to potentially dangerous uses.
In 2023, the Future of Life Institute, led by Tegmark, urged companies to pause development of AI systems that can equal or surpass human intelligence, until adequate control measures are established. Although this request has not materialized, the discussion about the risks of AI has gained relevance in recent months.