Carnegie Mellon Professor Leads Critical AI Safety Panel at OpenAI with Authority to Block Unsafe Releases

Zico Kolter began studying AI as a Georgetown University freshman. (Image posted on X by @zicokolter)
A Carnegie Mellon University professor currently holds one of the most crucial positions in the technology industry for those concerned about artificial intelligence's potential risks to humanity.
Zico Kolter heads a four-person panel at OpenAI with the authority to stop the ChatGPT creator from releasing new AI systems if they're deemed unsafe. That could mean technology powerful enough to help malicious actors develop weapons of mass destruction, or a poorly designed chatbot that harms users' mental health.
"We're definitely not just focused on existential concerns," Kolter explained in an Associated Press interview. "We're addressing the complete range of safety and security issues and critical topics that emerge when discussing these widely deployed AI systems."
OpenAI appointed the computer scientist as chair of its Safety and Security Committee over a year ago. However, the role gained increased significance last week when California and Delaware regulators made Kolter's oversight a fundamental component of agreements allowing OpenAI to establish a new business structure for easier capital raising and profit generation.
Safety has been fundamental to OpenAI's mission since its founding as a nonprofit research laboratory a decade ago, with the goal of developing beneficial AI that surpasses human capabilities. However, following ChatGPT's release and the subsequent global AI commercial surge, the company has faced accusations of rushing products to market before ensuring their safety to maintain competitive advantage. Internal conflicts leading to CEO Sam Altman's temporary removal in 2023 brought these concerns about mission drift to wider attention.
The San Francisco-based organization encountered opposition—including a lawsuit from co-founder Elon Musk—when it began transitioning toward a more conventional for-profit structure to advance its technology.
Agreements announced last week by OpenAI with California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings aimed to address these concerns.
Central to these formal commitments is a pledge that safety and security decisions must take precedence over financial considerations as OpenAI forms a new public benefit corporation under the control of its nonprofit OpenAI Foundation.
Kolter will serve on the nonprofit's board but not the for-profit board. However, he will have "full observation rights" to attend all for-profit board meetings and access information about AI safety decisions, according to Bonta's memorandum of understanding with OpenAI. Kolter is the only individual, besides Bonta, specifically named in the extensive document.
Kolter indicated that these agreements largely confirm that his safety committee, established last year, will maintain its existing authorities. The other three members also serve on OpenAI's board—including former US Army General Paul Nakasone, former commander of US Cyber Command. Altman resigned from the safety panel last year in a move perceived as enhancing its independence.
"We can request delays in model releases until certain mitigations are implemented," Kolter explained. He declined to reveal whether the safety panel has ever halted or required mitigation for a release, citing confidentiality protocols.
Kolter noted various concerns about AI agents that will require attention in coming months and years, from cybersecurity—"Could an agent encountering malicious text online accidentally extract data?"—to security concerns regarding AI model weights, which are numerical values influencing AI system performance.
"There are also topics either emerging or specific to this new class of AI model without real parallels in traditional security," he said. "Do models enable malicious users to significantly increase their capabilities for designing bioweapons or conducting malicious cyberattacks?"
"Finally, there's the impact of AI models on people," he added. "The effects on mental health, the consequences of human-model interactions. All these issues need addressing from a safety perspective."
OpenAI has already faced criticism this year regarding its flagship chatbot's behavior, including a wrongful-death lawsuit from California parents whose teenage son died by suicide in April after extensive interactions with ChatGPT.
Kolter, director of Carnegie Mellon's machine learning department, began studying AI as a Georgetown University freshman in the early 2000s, well before it became mainstream.
"When I entered machine learning, it was an esoteric, niche field," he reflected. "We called it machine learning because nobody wanted to use the term AI, which was considered an outdated field that had overpromised and underdelivered."
Kolter, 42, has followed OpenAI for years and was sufficiently connected to its founders to attend its launch party at an AI conference in 2015. Nevertheless, he didn't anticipate AI's rapid advancement.
"Very few people, even those deeply involved in machine learning, truly anticipated our current situation—the explosion of capabilities and emerging risks we're witnessing now," he said.
AI safety advocates will closely monitor OpenAI's restructuring and Kolter's work. One of the company's prominent critics expresses "cautious optimism," particularly if Kolter's group "can hire staff and play a substantial role."
"His background seems appropriate for this position. He appears to be a good choice for this leadership role," said Nathan Calvin, general counsel at AI policy nonprofit Encode. Calvin, whom OpenAI served with a subpoena at his residence during fact-finding for its defense against Musk's lawsuit, wants OpenAI to remain faithful to its original mission.
"Some commitments could be significant if board members take them seriously," Calvin observed. "Alternatively, they could be merely words on paper disconnected from actual practices. We don't yet know which scenario we're facing."
Source: https://www.ndtv.com/world-news/this-professor-leads-an-openai-panel-with-power-to-halt-unsafe-ai-releases-9563363