Large language models like OpenAI’s ChatGPT have made significant progress in answering complex questions in recent years, but they still have limitations.
NVIDIA CEO Jensen Huang says today’s artificial intelligence isn’t giving the best answers, and that the industry is still “years away” from an AI we can “significantly trust,” Business Insider reports.
As for hallucinations, the tendency to give false but plausible answers, the head of the world’s most valuable company believes people shouldn’t have to second-guess an AI’s answer by wondering whether it is a “hallucination” or “intelligent.”
Hallucinations are a persistent problem for AI chatbots. In one notable case, an American lawyer submitted court filings containing citations generated by ChatGPT; it later emerged that the OpenAI chatbot had referenced non-existent cases in its responses.
NVIDIA’s CEO has proposed requiring language models to examine and validate their answers against trusted sources before returning a result. He likens the process to fact-checking in journalism: compare the facts a source asserts with known truths, and if the answer proves even partially inaccurate, discard that source entirely.
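In code, that idea might look something like the minimal sketch below: a verifier that keeps only sources whose claims all match known ground truth, and accepts an answer only if every claim in it is backed by a surviving source. Every name and data structure here is a hypothetical illustration of Huang’s description, not a real NVIDIA or OpenAI API.

```python
from dataclasses import dataclass

# Hypothetical sketch of the "fact-check before answering" idea.

@dataclass
class Source:
    name: str
    claims: set[str]  # facts this source asserts

def verify_answer(answer_claims: set[str],
                  sources: list[Source],
                  known_truths: set[str]) -> bool:
    """Accept an answer only if every claim in it is backed by a
    source whose own claims are all consistent with known truths."""
    # If a source is even partially inaccurate, discard it entirely.
    trusted = [s for s in sources if s.claims <= known_truths]
    # Pool the claims that the surviving sources can back up.
    backed: set[str] = set().union(*(s.claims for s in trusted)) if trusted else set()
    return answer_claims <= backed

# Toy usage: the blog is discarded for its one false claim,
# so even its true claim about Paris can no longer back an answer.
truths = {"water boils at 100 °C at sea level", "Paris is the capital of France"}
sources = [
    Source("blog", {"Paris is the capital of France", "the Moon is made of cheese"}),
    Source("textbook", {"water boils at 100 °C at sea level"}),
]
print(verify_answer({"water boils at 100 °C at sea level"}, sources, truths))  # True
print(verify_answer({"Paris is the capital of France"}, sources, truths))      # False
```

The second call returning False illustrates the harsher part of the proposal: a partially inaccurate source is thrown out wholesale, even for the claims it got right.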

Earlier, it was reported that Google was investing in nuclear energy due to AI’s growing power appetite, and that Microsoft had “resurrected” a decommissioned nuclear reactor to power its servers.
In the history of the United States, a closed nuclear power plant has never been brought back into operation, much less one that supplies all of its output to a single customer.
In the meantime, OpenAI is expected to release a new AI model this winter. According to the developers, it should reach the level of artificial general intelligence (AGI), meaning that conversing with it will resemble conversing with a living person.