Cursor says it rebuilt how MoE models generate tokens on Blackwell GPUs, claiming 1.84x faster inference and more accurate outputs that let it ship model updates more often.
We rebuilt how MoE models generate tokens on Blackwell GPUs, resulting in 1.84x faster inference and more accurate outputs. These improvements directly contribute to how we train Composer, allowing us to ship improved versions of the model more often.
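For context on what "MoE token generation" involves: at each layer, a router scores a set of expert networks per token, and only the top-k experts actually run, with their outputs mixed by the router's gate weights. The sketch below illustrates that routing step under stated assumptions; all names, shapes, and the ReLU experts are hypothetical and do not reflect Cursor's implementation or its Blackwell-specific kernels.

```python
# Illustrative top-k mixture-of-experts layer for a single token.
# Hypothetical toy code, not Cursor's implementation.
import numpy as np

rng = np.random.default_rng(0)

D, N_EXPERTS, TOP_K = 16, 8, 2  # model dim, expert count, experts per token

# Each "expert" is a tiny feed-forward layer (random weights for the sketch).
expert_weights = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router_weights = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Apply a top-k MoE layer to one token vector x of shape (D,)."""
    logits = x @ router_weights                      # router scores, shape (N_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]                # indices of the k highest-scoring experts
    gates = np.exp(logits[top] - logits[top].max())  # softmax over the selected experts only
    gates /= gates.sum()
    # Only the selected experts execute — this sparsity is what makes
    # MoE inference cheaper than running every expert for every token.
    return sum(g * np.maximum(x @ expert_weights[i], 0.0) for g, i in zip(gates, top))

token = rng.standard_normal(D)
out = moe_layer(token)
print(out.shape)  # (16,)
```

The speedups described in the announcement come from how this per-token expert dispatch is scheduled on the GPU, not from the math itself, which stays as above.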