Testing Ollama on Hard Questions
Ollama is a popular platform for running language models on your local machine, with access to almost 100 different open source models, including Llama 3 from Meta, Phi-3 from Microsoft, Aya […]
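As a quick illustration, here is a minimal sketch of putting a "hard question" to a locally running model through Ollama's Python client. The model name (llama3) and the prompt are placeholders chosen for this example, and the sketch assumes the Ollama server is already running and the model has been pulled.

```python
# Minimal sketch: query a locally running model through Ollama's Python client.
# Assumes `pip install ollama`, a running Ollama server, and that
# `ollama pull llama3` has already been done. The model tag and the question
# are assumptions made for this example only.
import ollama

response = ollama.chat(
    model="llama3",  # any locally pulled model tag works here
    messages=[
        {
            "role": "user",
            "content": "A farmer has 17 sheep; all but 9 run away. How many are left?",
        }
    ],
)

# Print the model's answer to the trick question.
print(response["message"]["content"])
```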
Last week, NVIDIA announced Nemotron-4, which consists of three models: Base, Instruct and Reward. These three models work together within the NeMo framework to enable the creation and fine-tuning of new large language models.
At 340 billion parameters, this new entrant is far bigger than any other open source model, but the really big news is that Nemotron-4 comes with a permissive license that allows us to use the model to generate synthetic data at scale for the purpose of creating new models of our own.
Until now, most big models and APIs had clauses in their user agreements that explicitly forbade using the data they generate to train a new model. This video provides a full summary of the size, performance, technical report, and competitive position of Nemotron-4, and it describes what each of the three models does, including the production of synthetic data and the five-dimension framework used for model evaluation.
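To make that workflow concrete, here is a hypothetical sketch of the generate-then-filter pattern: the Instruct model proposes candidate responses to a prompt, the Reward model scores each candidate across five dimensions, and only the best-scoring pairs are kept as synthetic training data. The `instruct_generate` and `reward_score` functions below are stand-ins, not the actual NeMo API, and the five dimension names are an assumption based on the HelpSteer2-style attributes the Reward model is described as using.

```python
# Hypothetical sketch of reward-filtered synthetic data generation.
# `instruct_generate` and `reward_score` are placeholders standing in for calls
# to Nemotron-4 340B Instruct and Reward (e.g. served through the NeMo
# framework); the dimension names below are an assumption for this example.
import random
from typing import Dict, List

DIMENSIONS = ["helpfulness", "correctness", "coherence", "complexity", "verbosity"]


def instruct_generate(prompt: str, n: int = 4) -> List[str]:
    """Placeholder: the Instruct model would produce n candidate responses."""
    return [f"candidate response {i} to: {prompt}" for i in range(n)]


def reward_score(prompt: str, response: str) -> Dict[str, float]:
    """Placeholder: the Reward model would score a response on five dimensions."""
    return {dim: random.uniform(0.0, 4.0) for dim in DIMENSIONS}


def build_synthetic_pairs(prompts: List[str], keep_threshold: float = 3.0) -> List[Dict]:
    """Keep only (prompt, response) pairs whose mean reward clears the threshold."""
    dataset = []
    for prompt in prompts:
        for response in instruct_generate(prompt):
            scores = reward_score(prompt, response)
            if sum(scores.values()) / len(scores) >= keep_threshold:
                dataset.append({"prompt": prompt, "response": response, "scores": scores})
    return dataset


if __name__ == "__main__":
    pairs = build_synthetic_pairs(["Explain what a reward model does."])
    print(f"kept {len(pairs)} synthetic pairs")
```

In a real pipeline, the kept pairs would then feed supervised fine-tuning or preference optimization of a new model, which is exactly the use the permissive license now allows.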