The artificial intelligence models that power systems like OpenAI's ChatGPT, Google's Gemini and Meta's Llama will not be able to attain human levels of intelligence, Meta's AI head Yann LeCun has said, reports UNB.
In an interview published in the Financial Times on Wednesday, he offered insight into how the tech giant expects to develop the technology going forward, just weeks after its plans for massive spending spooked investors and wiped hundreds of billions of dollars off its market value, reports Forbes.
The models, commonly referred to as LLMs, are trained on massive quantities of data, and their capacity to respond correctly to prompts is limited by the nature of the data on which they are trained, according to LeCun, meaning they are accurate only when given the appropriate training data, it said.
LLMs have a "limited understanding of logic," lack persistent memory, do not understand the physical world, and cannot plan hierarchically, LeCun said, adding that they "cannot reason in any reasonable definition of the term."
Because they are accurate only when fed the correct training data, LLMs are also "intrinsically unsafe," said LeCun, regarded as one of the three "AI godfathers" for his foundational contributions to the field, adding that researchers seeking to produce human-level AI should explore other models, the report said.
LeCun said that he and his roughly 500-strong team at Meta's Fundamental AI Research lab are working to develop an entirely new generation of AI systems based on an approach known as "world modelling". In this approach, the system builds an understanding of the world around it in the same way humans do, developing a sense of what would happen if something changed, the report added.
LeCun predicted that achieving human-level AI through the world-modelling approach may take up to ten years.
Bd-pratidin English/Tanvir Raihan