This AI Info session, presented with the Office of Responsible AI (ORAI), features Enrique Noriega-Atala discussing "From Next-Word Prediction to Hallucinations: How Large Language Models Really Work." The talk demystifies how large language models generate text, explaining the mechanics of next-word prediction, why hallucinations occur, and what these behaviors reveal about the capabilities and limits of contemporary AI systems.
View recording
Browse all AI Info Recordings
Visit the AI Info page to learn how to become a presenter or to find more information about our AI Info sessions.