The Dwarkesh podcast with Ilya Sutskever was great. Well worth listening to.
A summary:
"here are the most important points from today's ilya sutskever podcast:
- superintelligence in 5-20 years
- current scaling will stall hard; we're back to real research
- superintelligence = super-fast continual learner, not finished oracle
- models generalize 100x worse than humans, the biggest AGI blocker
- need completely new ML paradigm (i have ideas, can't share rn)
- AI impact will hit hard, but only after economic diffusion
- breakthroughs historically needed almost no compute
- SSI has enough focused research compute to win
- current RL already eats more compute than pre-training" @slow_developer
Ilya later clarified:
"One point I made that didn't come across:
- Scaling the current thing will keep leading to improvements. In particular, it won't stall.
- But something important will continue to be missing." @ilyasut
https://www.youtube.com/watch?v=aR20FWCCjAs