Mira Murati, the former CTO of OpenAI, has long been a headache for Sam Altman, the father of ChatGPT. In fact, back in 2023, while still at the company, she was the first to admit that the artificial intelligence they had created would, one way or another, have to be regulated by the authorities of each country.
Now, having been out of the company for a while (the diaspora OpenAI has suffered in recent times deserves its own analysis), her goals are different. Murati has created what could, to some extent, be considered a kind of "anti-ChatGPT". And to do so, she has surrounded herself with nothing less than some of the world's leading AI experts.
The plans of OpenAI's former CTO
Mira Murati's new company has been named Thinking Machines Lab. But, contrary to what one might think at first, it will not be a new artificial intelligence, at least not a conventional one like OpenAI's ChatGPT. Her idea is to develop AI systems that are more accessible, customizable, and understandable for the general public.
As she herself has commented, and as several US media outlets have reported, her main purpose with Thinking Machines Lab is to close the gap between rapid advances in AI and the public's understanding of this technology. And how does she intend to do it? Through large-scale language models that drive scientific discoveries and similar applications.
In any case, the most striking aspect of her plans, and the one that seems to clash directly with the usual policies of Sam Altman, DeepSeek, and other AI heavyweights, is that Murati does not intend to leave people out of the equation. That is, her intention is for humans and AI to work side by side, so to speak, rather than for machines to largely replace people.
Likewise, both OpenAI's former CTO and her colleagues aim to contribute to external research on AI alignment by releasing code, datasets, and model specifications. In other words, to promote an open approach that collaborates with the rest of the scientific community.
AI safety must always be a priority
After what she learned at OpenAI, Murati has made it clear on multiple occasions that safety must be essential in any work related to artificial intelligence. And, given the warnings from many other AI experts, it seems this is the predominant mindset among specialists.
Commercial ambitions are, of course, another matter, as is how far they may push different companies down different paths. All the more so amid the red-hot battle between China and the United States for dominance in AI.