OpenAI vs Anthropic: A Head-to-Head Comparison
In light of last week's news, many people are seeking to better understand the alternatives to OpenAI, and Anthropic is currently the alternative that most observers are […]
In this video, we’ll look at the results of a recent study in which participants classified AI-generated faces as genuine two-thirds of the time, significantly more often than they rated photos of real people as genuine.
It’s easy to understand why people can be fooled by a good fake, but how can AI-generated images actually score higher on looking genuine than photos of real people?
I’ll share evidence that points to some good reasons why.
Along the way, we’ll briefly review how generative adversarial networks like StyleGAN2 work, and we’ll also create some AI-generated faces for ourselves!
The video ends with a brief review of some legitimate uses for creating AI-generated “faces of people that don’t exist.”
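For readers curious about the adversarial training the video reviews, here is a minimal sketch of the idea behind generative adversarial networks. It is nothing like StyleGAN2 in scale: the "real data" is a toy 1-D Gaussian, and the generator and discriminator are simple linear and logistic models, all of which are illustrative assumptions rather than anything from the video. The point is only to show the two-player loop: the discriminator learns to tell real samples from generated ones, while the generator learns to produce samples the discriminator calls real.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" samples: a Gaussian centered at 4.0 stands in for real photos.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: maps noise z to a sample, here just g(z) = a*z + b.
a, b = 1.0, 0.0
# Discriminator: logistic classifier D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

lr, n = 0.05, 64
for step in range(2000):
    z = rng.normal(0.0, 1.0, n)
    xr, xf = real_batch(n), a * z + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w += lr * (np.mean((1 - dr) * xr) - np.mean(df * xf))
    c += lr * (np.mean(1 - dr) - np.mean(df))

    # Generator step: ascend the non-saturating objective log D(fake),
    # i.e. nudge generated samples toward what D currently calls "real".
    df = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1 - df) * w * z)
    b += lr * np.mean((1 - df) * w)

fake = a * rng.normal(0.0, 1.0, 1000) + b
print("generated mean %.2f (real mean is 4.0)" % fake.mean())
```

After training, the generator's output distribution has drifted toward the real one, which is the same dynamic that, at vastly larger scale and with convolutional networks, lets StyleGAN2 produce photorealistic faces.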
Copyright AI Master Group 2023-24