Andrew Curran suggests that OpenAI and Anthropic both finishing capable models around early March could be mainly a result of scale: "then this is potentially purely a result of scale. Q1 2026 was just the first time anyone had enough compute to train at this level."