New Method Runs Big LLMs on Smartphones
A major breakthrough for running large language models on smartphones has just been announced. It’s called PowerInfer-2, and it works by examining every option for a processing […]
This video shares details about a remarkable experiment by researchers in Tokyo, who teamed up with Oxford and Cambridge Universities to study whether large language models might now be able to write code that improves their own performance.
The answer was Yes.
Not only that, the model created a whole new approach that placed it at the top of a leaderboard, using a novel method that had not yet been tried or documented in any academic research paper. How can that happen?
The video describes how the model alternated between different kinds of strategies, much as a data scientist would, ultimately producing an innovative new loss function with several interesting properties. In short, the model was systematically generating hypotheses and testing them. Finally, the video identifies five aspects of the research question that can potentially be generalized, and names three ways the findings might be applied to new problem sets, including virtual reality…
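The generate-and-test loop described above can be sketched in miniature. This is only an illustrative toy, not the researchers' actual method: in the real experiment an LLM writes brand-new loss-function code each round, whereas here a small hand-written pool of candidate losses stands in for the model's proposals, and a tiny one-dimensional classification task stands in for the real benchmark. All function and variable names are hypothetical.

```python
import math

# Toy stand-in for the "propose a loss, test it, keep the best" loop.
# A real system would have an LLM generate each candidate loss function;
# here three hand-written candidates play that role (all illustrative).

DATA = [(-2.0, 0), (-1.0, 0), (-0.5, 0), (0.5, 1), (1.0, 1), (2.0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Candidate losses: each maps (predicted probability, true label) -> loss.
def log_loss(p, y):
    eps = 1e-9
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def squared_loss(p, y):
    return (p - y) ** 2

def hinge_like_loss(p, y):
    margin = (2 * y - 1) * (2 * p - 1)   # rescaled to [-1, 1]
    return max(0.0, 1.0 - margin)

CANDIDATES = {"log": log_loss, "squared": squared_loss,
              "hinge-like": hinge_like_loss}

def train_and_score(loss_fn, steps=200, lr=0.5):
    """Fit a single weight w by gradient descent on loss_fn,
    using a central-difference numerical gradient, then return
    classification accuracy on the toy data."""
    w, h = 0.0, 1e-5
    for _ in range(steps):
        grad = 0.0
        for x, y in DATA:
            lo = loss_fn(sigmoid((w - h) * x), y)
            hi = loss_fn(sigmoid((w + h) * x), y)
            grad += (hi - lo) / (2 * h)
        w -= lr * grad / len(DATA)
    correct = sum(1 for x, y in DATA
                  if (sigmoid(w * x) >= 0.5) == (y == 1))
    return correct / len(DATA)

# The hypothesis loop: evaluate every proposed loss, keep the best.
scores = {name: train_and_score(fn) for name, fn in CANDIDATES.items()}
best = max(scores, key=scores.get)
print(best, scores[best])
```

In the research the video summarizes, the interesting part is that the candidate pool is not fixed: the LLM reads the scores from previous rounds and writes new loss-function code in response, which is what allowed it to land on a method not previously documented.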
Copyright AI Master Group 2023-24