Gates, initially skeptical that AI would advance so quickly, expressed astonishment at how sophisticated models like ChatGPT have become and acknowledged that he does not understand how these models encode complex information such as Shakespearean text.
"I was very skeptical. I didn't expect ChatGPT to get so good," Gates said.
Altman assured listeners that the intricacies of AI encoding and operations would eventually be unraveled, highlighting ongoing efforts in interpretability research. He drew parallels between the elusive nature of understanding human brain function and the challenges of fully grasping AI’s internal workings.
Despite the vast scale of these models, Altman is optimistic that a deeper understanding will come with time, and he believes it will significantly improve how the models are developed and applied. For now, much remains a mystery: Altman admitted that when OpenAI built GPT-1, the team had "no deep understanding of how it worked or why it worked."
A focal point of their discussion was the anticipated advancements in AI, including multimodality (integrating inputs and outputs across formats such as text, images and video) and improvements in reasoning and reliability. Altman emphasized the importance of personalization, using an individual's own data to tailor AI interactions, which he sees as critical to the technology's evolution.
Gates and Altman also explored the necessity of adaptive computing, where AI allocates computational resources based on the complexity of the task at hand, rather than uniformly across all processes. This approach is seen as essential for tackling more complex problems and advancing AI’s problem-solving capabilities.
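To make the idea concrete, here is a minimal, hypothetical sketch of adaptive compute in Python: an estimated difficulty score decides whether a query goes to a cheap or an expensive path. The `small_model`, `large_model`, `estimate_difficulty` functions and the `threshold` parameter are all invented stand-ins for illustration, not anything Altman described; real systems might instead use techniques such as early-exit layers or variable amounts of inference-time reasoning.

```python
# Hypothetical illustration of adaptive compute: spend more resources only
# when the task seems to demand it. All names here are made up for the sketch.

def small_model(query: str) -> str:
    """Placeholder for a fast, low-cost model."""
    return f"[small-model answer to: {query}]"

def large_model(query: str) -> str:
    """Placeholder for a slower, more capable model."""
    return f"[large-model answer to: {query}]"

def estimate_difficulty(query: str) -> float:
    """Toy heuristic: longer, question-style queries count as harder."""
    length_score = min(len(query.split()) / 50.0, 1.0)
    question_score = 0.3 if "?" in query else 0.0
    return min(length_score + question_score, 1.0)

def answer(query: str, threshold: float = 0.5) -> str:
    """Route the query based on estimated difficulty."""
    if estimate_difficulty(query) < threshold:
        return small_model(query)   # easy query: cheap path
    return large_model(query)       # hard query: expensive path

if __name__ == "__main__":
    print(answer("What is 2 + 2?"))
    print(answer("Explain, step by step, how transformer attention scales "
                 "with sequence length and what that implies for reasoning "
                 "over long, multi-document inputs?"))
```

In this toy version the short arithmetic question falls below the threshold and takes the cheap path, while the long multi-part question is routed to the expensive one, mirroring the uneven allocation of compute the two discussed.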