OpenAI and Mark Chen announce the OpenAI Safety Fellowship to fund independent research on AI safety and alignment, with applications open through May 4, 2026.
We're excited to launch the OpenAI Safety Fellowship, a new program supporting rigorous, independent research on AI safety and alignment, including areas like evaluation, robustness, and scalable mitigations, as well as the next generation of talent. Applications are open through May 4, 2026.