Andrew Ng at Snowflake: AI Agent Battle Royale
Andrew Ng was the keynote speaker last week on Day Two of the Snowflake BUILD conference, and in that talk, he shared results from testing different kinds of agentic workflows […]
Since the architecture of OpenAI's powerful o1 reasoning model has not been disclosed, there is a lot of curiosity about how it works. To shed light on that, this video pulls together information from OpenAI itself along with systematic tests published in a recent paper by members of OpenO1, a group that aims to create an open-source version of the o1 model.
First, performance of the o1 model is compared against four well-known open-source methods that are designed to achieve similar results.
Next, six types of reasoning strategies exhibited by the o1 model are described, and those strategies are mapped to four very different problem sets: HotpotQA, Collie, USACO and AIME, covering commonsense reasoning, coding and math. The analysis shows that the o1 model's choice of reasoning methods is far from random. On the contrary, the problem-solving strategies the model selects are well matched to the problems presented.
Copyright AI Master Group 2023-24