AI risks
Carnegie Mellon Professor Leads Critical AI Safety Panel at OpenAI with Authority to Block Unsafe Releases
Nov 03, 2025 02:19 am CST
Carnegie Mellon professor Zico Kolter heads OpenAI's Safety and Security Committee, an oversight panel with the authority to halt AI releases it deems unsafe. As OpenAI transitions to a new business structure, Kolter's role gains significance in ensuring that safety concerns outweigh financial considerations in the development of powerful AI systems. The panel addresses both existential risks and more immediate concerns, such as the mental health impacts of interacting with AI.
Examining the Existential Risk of AI: Expert Perspectives on Whether Advanced Intelligence Threatens Humanity
Oct 06, 2025 03:07 pm CST
AI pioneer Geoffrey Hinton estimates a 10-20% chance that artificial intelligence could lead to human extinction within 30 years, but expert opinion remains divided. Of five specialists surveyed, three disagree that AI poses an existential threat to humanity, offering detailed reasoning about the actual risks and capabilities of current and future AI systems.

