Gary Kazantsev explores the Future of AI in “Whither AI? 2.0”
Author: Intellectual Salon Series
Uploaded: 2026-02-21
Views: 10
Description:
Gary Kazantsev, Head of Quant Technology Strategy in the Office of the CTO at Bloomberg, delivered a wide-ranging and thought-provoking talk as part of the Intellectual Salon Series, titled “Whither AI? 2.0.” The talk charted the trajectory of artificial intelligence technologies, from foundational definitions to cutting-edge developments, with a focus on the evolving role of large language models (LLMs) and their practical, societal, and philosophical implications.
Gary Kazantsev, a leader in applied machine learning and natural language processing, helps direct Bloomberg’s strategic thinking at the intersection of quantitative products and AI, leveraging his background in computer science, mathematics, and linguistics to bridge theoretical innovation and real-world applications. He has led the development of machine learning solutions that power many of Bloomberg’s products and is a recognized voice in shaping research strategy and the discussion on the trajectory and impact of AI.
During the talk, Gary Kazantsev began by disentangling terms like AI, machine learning, deep learning, and data science - emphasizing their intersections and distinctions through a conceptual Venn diagram. He placed these in the broader context of computer science and statistics, offering a clarifying foundation for understanding current developments.
He then traced the arc of technical progress, noting the exponential pace of ML research, the rise of open-weight models like LLaMA 3 and DeepSeek, and advances in specialized systems such as AlphaGeometry. Gary Kazantsev highlighted the expansion of applications across domains, spanning finance, healthcare, materials science, and even legal systems. The presentation emphasized how today’s generative LLMs differ from earlier systems in their breadth, accessibility, and surprising generality - despite known limitations in reasoning, planning, and robustness.
Another theme of the talk was the organizing principle behind modern LLM training. Trained to guess the next word in a sequence, these models absorb vast statistical regularities from language - an approach that, while conceptually simple, gives rise to surprisingly complex capabilities. Kazantsev underscored how this fundamental mechanism compels us to think from first principles about what these systems can (and cannot) do. The presentation also emphasized that AI is not new. Drawing on historical context, Kazantsev traced decades of innovation, from symbolic systems to statistical models, reminding the audience that the field has always cycled through phases of enthusiasm, overreach, and recalibration. Today’s breakthroughs, he noted, are built upon a deep and layered legacy of interdisciplinary research spanning computer science, linguistics, statistics, and cognitive science. #IntellectualSalonSeries
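To make the next-word-prediction idea concrete, here is a minimal sketch (not the systems Kazantsev described, and nothing like a real LLM): the training objective reduces to estimating the probability of the next word given the preceding context. The toy corpus and helper function below are illustrative assumptions; a real LLM learns the same objective with a neural network over billions of documents rather than a count table over one sentence.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "vast amounts of language".
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each one-word context.
# This table of conditional frequencies is our entire "model".
counts = defaultdict(Counter)
for context, nxt in zip(corpus, corpus[1:]):
    counts[context][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often observed after `word` in the corpus."""
    return counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" - it follows "the" most often here
```

Even at this scale, the model has absorbed a statistical regularity of its corpus ("the" is usually followed by "cat") without being told anything about grammar or meaning, which is the conceptual seed of the surprising generality the talk discusses.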