Discussion about this post

joe

This is really exciting, admirable, and valuable!

The core disagreement seems to hinge on what constitutes the primary bottleneck for AI progress.

- AI 2027: the main bottleneck is internal capability (AI's ability to reason, code, and do research), which, once automated, overcomes the other obstacles.

- AI as normal tech: the main bottleneck is external (the need for feedback loops with the real world, adoption, infrastructure, and solving the long tail of errors).

What specific, observable evidence in the next 2-3 years would cause each side to update their view the most?

For Arvind and Sayash: Are you assuming we won't see some technique for enabling meta-learning or continual learning? Otherwise it's unclear why one would think the main bottleneck is brute-force iterative feedback. And even if AI does need massive real-world feedback, it's unclear this is a real bottleneck: we live in a world saturated with sensors, deployed AI systems, and web-scale human interaction data. To ground out a scenario, do you basically believe it's extremely unlikely we'll have AI systems that can handle complex travel booking after seeing 10 examples rather than 10,000?

For the AI 2027 authors, a sharper crux: does your timeline depend on qualitative breakthroughs in AI capabilities, or does current-paradigm scaling suffice? More specifically, if by 2027 AI systems automate 60-70% of AI R&D tasks—handling implementation, optimization, debugging, running experiments—but humans remain essential for creative hypothesis generation, research direction, and conceptual breakthroughs, does that keep us in "normal tech" territory? In this scenario, research accelerates as humans focus on the creative bottlenecks while AI handles the grunt work, but there's no recursive takeoff because the hardest parts of research (the novel insights that unlock new paradigms) remain human-dependent. Does your short timeline rely on AI matching humans at every aspect of research including creative breakthroughs, or is being superhuman at 70% of research tasks (the engineering-heavy parts) sufficient to trigger the dynamics you expect?

Anthony Bailey

Huge credit to all concerned for getting together in order to be explicit about where you *are* commonly concerned.

It's vital to avoid the claim that a lack of agreement means all caution should be thrown to the wind - thanks for removing ammunition from those who would argue this.
