โณ๐Ÿ“…๐Ÿ—“๏ธ๐Ÿš€ The next 36 months will be WILD

🤖 AI Summary

  • 🤖 Everyone is talking about AGI, ASI, and recursive self-improvement (RSI) in the 2027 to 2028 window (0:01).
  • 🤖 Dario Amodei of Anthropic predicts powerful AI matching Nobel-level capability, "a country of geniuses in a data center," around 2027-2028 (0:31).
  • 🤖 The AI 2027 report's modal prediction for AGI is 2027-2028, with the forecast window collapsing from decades to months (0:49).
  • 🤖 Jensen Huang has said AGI will be competitive on broad human tests within 5 years (1:04).
  • 🤖 Sam Altman has stated that a research-intern-equivalent AI will exist by 2026, and superintelligence by 2028 (1:13).
  • 🤖 Recursive self-improvement is expected even earlier, sometime this year or by 2027 at the latest (1:51).
  • 🤖 The METR eval shows a median autonomous task horizon of 14.5 hours, with the 95th percentile reaching 90+ hours as of February 2026 (2:31).
  • 🤖 Machine autonomy is now doubling every 90 to 100 days, meaning three more doublings could occur this year alone (3:08).
  • 🤖 Five capabilities are expected to enable recursive self-improvement: algorithmic research (math), data generation and curation, writing and executing code, training models, and model evaluation (4:10).
  • 🤖 Algorithmic research, data, and code are the hard parts, and all three are advancing (10:21).
  • 🤖 The "industrial siege" refers to the AI race and its point of no return, driven by companies and nations racing against each other (10:39).
  • 🤖 The point of no return is characterized by sunk costs: companies are fully committed to AI investment and cannot back down (12:17).
  • 🤖 Pausing AI development is seen as fatal, and no one is going to slow down (13:21).
  • 🤖 Chips are no longer a bottleneck; high-bandwidth memory is the current one, but it is expected to be solved within 12 to 24 months (13:37).
  • 🤖 Energy is the next big bottleneck, but solutions like microgrids (solar, natural gas) and traditional nuclear power are being pursued (16:09).
  • 🤖 Small modular reactors (SMRs) are being researched but are not expected until 2028-2030 (16:59).
  • 🤖 Anti-data-center sentiment is rising and is bipartisan, which is a concern for national progress (17:40).
  • 🤖 People often expect AGI to be defined by human-like consciousness or conversation, but the real measure of AI is its economic impact (19:57).
  • 🤖 Autonomous projects and multi-agent swarms are coming quickly, especially in cyberspace (20:44).
  • 🤖 Persistent memory and tool fluency are also key capabilities arriving by 2028 (21:15).
  • 🤖 Economic impact and job dislocation are already happening, though not well tracked, demonstrating a "Solow's paradox 2.0" (22:21).
  • 🤖 The J-curve of productivity shows that initial investment makes economies less efficient before productivity explodes and labor decouples from output (23:31).
  • 🤖 The automation cliff is not mass firings but "ghost jobs": positions never filled or created, evident in entry-level hiring freezes and long job-search latency for new grads (24:30).
  • 🤖 AI is in an augmentation phase, but once it becomes reliable and easy to deploy, adoption will become mandatory, producing the automation cliff (25:30).
  • 🤖 Risks are categorized by impact and predictability: unknown societal outcomes and x-risk are high impact, low predictability (27:35).
  • 🤖 Scaling laws (model capabilities, compute growth) are high impact, high predictability (28:28).
  • 🤖 Market forces and bottlenecks like energy and memory are low impact, high predictability, since the market is already solving them (29:01).
  • 🤖 Leading indicators to watch: METR scores crossing 24 hours, high-bandwidth memory dropping out of the news, capex crossing the trillion-dollar-per-year threshold, and continued implosion of entry-level hiring (29:20).
  • 🤖 People generally fall into four groups on AI: doomers (high significance, bad outcome), optimistic accelerationists (high significance, good outcome), technoskeptics (low significance), and those with realistic engineering concerns (problems are real but solvable) (30:37).
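The "three more doublings" bullet above is ordinary compound growth. A minimal sketch of that arithmetic, taking the 14.5-hour median and a 90-day doubling period as given by the video (both are the video's claims, not independently verified figures):

```python
def projected_horizon_hours(start_hours: float, days_elapsed: float,
                            doubling_days: float) -> float:
    """Exponential growth: the task horizon doubles every `doubling_days`."""
    return start_hours * 2 ** (days_elapsed / doubling_days)

# Three doublings (~270 days at a 90-day doubling period):
# 14.5 h -> 29 h -> 58 h -> 116 h by year's end.
print(projected_horizon_hours(14.5, 3 * 90, 90))  # 116.0
```

Note how sensitive the projection is to the doubling period: stretching it from 90 to 100 days over the same 270 days yields roughly 14.5 × 2^2.7 ≈ 94 hours instead of 116.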

🤔 Evaluation

  • 🤔 The video presents a highly optimistic, accelerationist view of AI development, predicting AGI and ASI within the next 36 months (0:01-1:59).
  • 🤔 This perspective aligns with tech figures like Sam Altman and Jensen Huang (1:04-1:13) but contrasts with more cautious researchers like Geoffrey Hinton and Yoshua Bengio, who emphasize the need to slow down and prioritize safety.
  • 🤔 The video highlights rapid progress in AI capabilities, citing metrics like autonomous task horizons (2:31-3:49) and advances in algorithmic research, data generation, and code writing (4:10-7:51).
  • 🤔 These technical advances are widely acknowledged within the AI community, but their precise implications and timelines for AGI remain debated. For example, some argue that current progress, while impressive, does not inherently guarantee the swift emergence of AGI, as explained in 🤖🐍🔎 AI Snake Oil by Arvind Narayanan and Sayash Kapoor.
  • 🤔 The video discusses the "industrial siege," driven by competitive race dynamics among companies and nations, which creates an all-in, win-or-lose mentality in which pausing is seen as fatal (10:37-13:23).
  • 🤔 This competitive landscape is indeed a significant factor in AI development, as explored in books like The Coming Wave by Mustafa Suleyman, which addresses the containment problem of rapidly advancing technology. However, external regulatory bodies and international cooperation initiatives, not extensively discussed in the video, also shape this race.
  • 🤔 Bottlenecks like chips and high-bandwidth memory are presented as temporary and solvable by market forces (13:27-14:36), while energy is acknowledged as a larger challenge, though one with proposed local solutions like microgrids (16:09-17:29).
  • 🤔 While market forces can alleviate supply-chain issues, the long-term energy demands of AI data centers and their environmental impact are complex issues with ongoing research and policy debates, as detailed in Atlas of AI by Kate Crawford, which examines the hidden costs of AI.
  • 🤔 The video posits that the economic impact of AI, particularly job displacement through ghost jobs and jobless growth, is already occurring and will accelerate (22:19-25:12).
  • 🤔 The idea of labor decoupling from economic output is a recurring theme in discussions of automation. However, the precise extent and speed of job displacement, and the emergence of new job categories, remain subjects of ongoing economic analysis and vary across sectors.
  • 🤔 A topic worth exploring for deeper understanding is the role of global governance and ethical frameworks in managing the risks and ensuring equitable benefits of rapidly advancing AI. While the video touches on risk categories, it doesn't delve into mechanisms for international oversight or the development of ethical AI principles, which are critical discussions in the AI ethics community and at sources like Stanford Human-Centered Artificial Intelligence (HAI).

โ“ Frequently Asked Questions (FAQ)

🤖 Q: What is the estimated timeline for artificial general intelligence (AGI) and artificial superintelligence (ASI)?

🤖 A: The experts cited converge on a 2027 to 2028 window for AGI and ASI, with recursive self-improvement potentially occurring even earlier, sometime this year or by 2027 at the latest (0:01-1:59).

🤖 Q: What are the key indicators of progress toward recursive self-improvement in AI?

🤖 A: Five specific capabilities are expected: algorithmic research, data generation and curation, writing and executing code, training models, and model evaluation (4:10). Advances in math, data, and code are considered the hardest parts (10:21).

🤖 Q: How are the economic impacts of AI, such as job displacement, being observed?

🤖 A: The economic impact is already happening, characterized by "ghost jobs" (positions never created or filled), leading to jobless recoveries and jobless growth. This is evident in entry-level hiring freezes and increased job-search latency for new graduates (24:00-25:12).

📚 Book Recommendations

↔️ Similar

🆚 Contrasting

  • 🤖🐍🔎 AI Snake Oil by Arvind Narayanan and Sayash Kapoor cuts through the hype surrounding artificial intelligence, revealing its limitations and what it truly can and cannot do.
  • Artificial Intelligence: A Guide for Thinking Humans by Melanie Mitchell provides an accessible explanation of AI's current capabilities and fundamental limitations.