Artificial intelligence has proven to be an extraordinary collaboration tool, but some already argue that this reputation lacks foundation. From what I’ve seen to date, I completely agree.
Its arrival and adoption at every level has left no one indifferent. There is no doubt about that, because not a day goes by without fresh news about it. Yes, it takes people’s jobs, because it is more intelligent than people… Here I must qualify that claim.
Isn’t that what all technology companies are after? What they call Artificial General Intelligence (AGI), on which companies like Meta, Google and OpenAI have already focused all their efforts.
Yes, that is apparently the goal of the sector’s great gurus, but I think that, before achieving what many still believe is impossible, it would be wise to review where current AI stands, how we are applying it, whether its results are reliable and, above all, whether it is saving us the time we thought we were going to gain.
Spoiler: it isn’t. In fact, we are spending far more time correcting its errors than taking advantage of its virtues, not to mention that it is giving us a very limited view of certain topics based on our queries.
AI under review
Although AI seems to know how to do everything, the technology is riddled with errors, among other reasons because the information it draws on to produce its results carries flaws of its own.
This is one of the reasons why human labor will always be necessary: people who can spot errors and remove them, either while the AI works or at the end of the process, before certain tasks are signed off.
The problem is that machines do not work at a human pace, and there have already been cases in which humans could not keep up with correcting all the mistakes that artificial intelligence makes.
Precisely in the workplace, a new term has been coined for this: AI workslop, AI-generated content that appears to have great value but in reality lacks substance and is of no use at all in advancing or completing certain tasks.
This has caused big problems for some of the companies that took the plunge and invested in AI: according to a recent report by Stanford Social Media Lab and BetterUp Labs, 95% of organizations see no measurable return on what they have paid to integrate tools powered by this technology.
This is due, among other things, to the original approach of generative AI and the way it has operated to date, which, most of the time, amounts to collecting and analyzing data and then spitting it back out without any real sense.
No judgment of its own, and not very specific either
Artificial intelligence usually offers neither opinions nor value judgments. It refrains precisely to avoid making mistakes or giving risky answers, opting instead for a neutral position so that it is the user who makes the decision, good or bad.
Nor is it very specific, nor does it interpret the information it is given with any common sense; it limits itself to summarizing what has been requested, leaving the person who uses it to decide what is useful and what is not.
And that is where the problem lies: it is getting us used to settling for the superficial instead of the concrete, for a general idea of things. In fact, it is leading us to lose the ability to separate the wheat from the chaff, the relevant from the filler and, ultimately, the original from the hackneyed.
Therein lies the real problem with this technology: tools like Copilot, Gemini or ChatGPT, as well as features such as Google AI Mode, provide very generic, usually repetitive ideas and, worse still, an excess of information that does not resolve specific doubts.
This means that if we ask for a summary of the Second World War, for example, it is very likely to focus on information that is already well known rather than on lesser-known episodes that might better answer specific questions about it.
Although there are exceptions, since everything depends on how the prompt is phrased, AI tends to fuel what is also known as infoxication, or information overload, which in turn perpetuates biased approaches and assessments.
Given that AI is not only making us work more but is also feeding us information of lesser value, biased and mostly repetitive, how is it possible that the most cutting-edge companies keep fighting to go one step further without fixing the situation we are in now? It’s the market, friends.