A top researcher at Google DeepMind just released an important paper, “Mixture of a Million Experts.” As the paper’s title announces, it describes an approach that produced the first known Transformer model with more than a million experts.
For context, current Mixture-of-Experts models typically use between 4 and 32 experts in the smaller configurations, with most of the larger ones reaching up to 128.
This video reviews the Mixture-of-Experts method, including why and where it’s used and the computational challenges it raises. Next, it summarizes the findings of another important paper from earlier this year, which introduced a new scaling law for Mixture-of-Experts models. That sets us up to review the “Million Experts” paper by Xu He.
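For readers new to the technique: in a Mixture-of-Experts layer, the Transformer’s feed-forward block is replaced by a pool of small expert networks, and a learned router activates only a few of them per token. That is what makes very large parameter counts affordable, and also what creates the routing and load-balancing challenges the video touches on. Below is a minimal sketch, assuming PyTorch and purely illustrative sizes (8 experts, top-2 routing), not any particular published model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """A conventional Mixture-of-Experts feed-forward layer with top-k routing."""
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.k = k
        self.router = nn.Linear(d_model, n_experts)   # gating network scores every expert
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:    # x: (tokens, d_model)
        scores = self.router(x)                             # (tokens, n_experts)
        top_scores, top_idx = scores.topk(self.k, dim=-1)   # keep only k experts per token
        weights = F.softmax(top_scores, dim=-1)             # mixture weights over the chosen k
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):       # each expert sees only its tokens
                mask = top_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

# Example: 16 token vectors of width 64 pass through 8 experts, 2 active per token.
tokens = torch.randn(16, 64)
layer = TopKMoE(d_model=64, d_hidden=256)
print(layer(tokens).shape)   # torch.Size([16, 64])
```

Because only k of the experts run for any given token, total parameters grow with the expert count while per-token compute stays roughly constant, which is the basic appeal of the method.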
The video then describes the two key strategies that enabled scaling to over a million experts by making each expert only a single neuron in size. Next, it shares a process map for the new approach, and concludes with ideas about where this might be most relevant, including applications that involve continuous data streams.
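The summary above only names the single-neuron experts; the other strategy, as I read the paper, is a product-key style lookup that scores two small sub-key tables of size roughly the square root of the expert count instead of scoring all experts directly. The sketch below is a rough illustration of both ideas under those assumptions; the class name, dimensions, and defaults (SingleNeuronExpertLayer, n_side, k) are mine for illustration, not the paper’s configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SingleNeuronExpertLayer(nn.Module):
    """Sketch: a huge pool of single-neuron experts selected by a product-key lookup."""
    def __init__(self, d_model: int, n_side: int = 1024, k: int = 8):
        super().__init__()
        self.n_side, self.k = n_side, k
        n_experts = n_side * n_side                   # n_side = 1024 gives ~1.05M experts
        half = d_model // 2
        self.query = nn.Linear(d_model, d_model)
        self.subkeys1 = nn.Parameter(torch.randn(n_side, half) * 0.02)
        self.subkeys2 = nn.Parameter(torch.randn(n_side, half) * 0.02)
        self.down = nn.Embedding(n_experts, d_model)  # expert i is one input vector u_i ...
        self.up = nn.Embedding(n_experts, d_model)    # ... and one output vector v_i

    def forward(self, x: torch.Tensor) -> torch.Tensor:          # x: (tokens, d_model)
        q1, q2 = self.query(x).chunk(2, dim=-1)
        s1, i1 = (q1 @ self.subkeys1.t()).topk(self.k, dim=-1)   # score only sqrt(N) sub-keys
        s2, i2 = (q2 @ self.subkeys2.t()).topk(self.k, dim=-1)
        # Combine the two top-k lists into k*k candidate experts, then keep the best k.
        scores = (s1[:, :, None] + s2[:, None, :]).flatten(1)
        idx = (i1[:, :, None] * self.n_side + i2[:, None, :]).flatten(1)
        top_scores, pos = scores.topk(self.k, dim=-1)
        expert_ids = idx.gather(1, pos)               # (tokens, k) retrieved expert ids
        gate = F.softmax(top_scores, dim=-1)          # gate over the retrieved experts
        u = self.down(expert_ids)                     # (tokens, k, d_model)
        v = self.up(expert_ids)
        h = F.gelu((u * x[:, None, :]).sum(-1))       # one scalar activation per tiny expert
        return (gate * h)[..., None].mul(v).sum(dim=1)

# Example with a smaller pool (256^2 = 65,536 experts) to keep memory modest;
# n_side=1024 would give just over a million experts.
out = SingleNeuronExpertLayer(d_model=64, n_side=256, k=8)(torch.randn(8, 64))
print(out.shape)   # torch.Size([8, 64])
```

The point of the sub-key factorization is that selecting from a million experts costs two top-k searches over about a thousand keys each, rather than a scan of the full million, while each retrieved expert contributes only a single neuron’s worth of compute.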