AI safety

  • Google's AI Safety Initiatives in India: Protecting Vulnerable Users and Combating Digital Fraud

    Nov 21, 2025 02:56 am CST

Google is developing safe and trusted AI technologies to protect vulnerable users in India from sophisticated digital scams. Its efforts include real-time fraud detection on Pixel phones, enhanced cybersecurity measures, and investment in digital literacy programs, alongside collaboration with local institutions to create inclusive AI models suited to India and the Global South.

  • Carnegie Mellon Professor Leads Critical AI Safety Panel at OpenAI with Authority to Block Unsafe Releases

    Nov 03, 2025 02:19 am CST

Carnegie Mellon professor Zico Kolter heads OpenAI's Safety and Security Committee, an oversight panel with the authority to halt AI releases deemed unsafe. As OpenAI transitions to a new business structure, Kolter's role gains significance in ensuring that safety concerns take precedence over financial considerations in the development of powerful AI systems. The panel addresses both existential risks and immediate concerns, such as the mental health impacts of AI interactions.

  • OpenAI Reports 1.2 Million ChatGPT Users Show Signs of Suicidal Intent: New Safety Measures Implemented

    Oct 29, 2025 02:06 am CST

OpenAI reveals that approximately 1.2 million ChatGPT users have shown indicators of suicidal intent. In response, the company has implemented enhanced safety features, including improved mental health recognition, crisis hotline access, and collaboration with mental health professionals, following a tragic incident involving a California teenager.

  • Prince Harry and Meghan Join Global Call to Ban AI Superintelligence Over Humanity Concerns

    Oct 22, 2025 01:27 pm CST

    Prince Harry, Meghan Markle, and a diverse coalition of prominent figures have signed a statement calling for the prohibition of AI superintelligence development until safety can be assured. The letter, organized by the Future of Life Institute, warns of risks ranging from human economic obsolescence to potential extinction and challenges major tech companies racing to build AI that could outperform humans at cognitive tasks. Notable signatories include AI pioneers Geoffrey Hinton and Yoshua Bengio, alongside figures from across political and professional spectrums.

  • India to Implement Comprehensive Deepfake Regulations Using Innovative Techno-Legal Approach

    Oct 18, 2025 06:09 pm CST

India's IT Minister Ashwini Vaishnaw announced at the NDTV World Summit 2025 that the country will soon introduce regulations to combat deepfakes through a dual approach combining technical solutions and legal frameworks. The strategy aims to balance AI innovation with social protection, featuring detection technology developed by IIT Jodhpur that can identify deepfakes with over 90% accuracy.

  • ChatGPT to Allow Adult Content: OpenAI's New Strategy for Verified Users Only

    Oct 17, 2025 11:18 pm CST

OpenAI is shifting its content policy to allow adult-oriented conversations on ChatGPT, but exclusively for verified adults. The move comes as the company explores new revenue streams while weighing ethical considerations, and it follows similar trends in the AI industry, where companies have faced both opportunities and challenges in managing mature content. This article explores the business implications, ethical concerns, and potential impacts of introducing adult content capabilities to mainstream AI systems.

  • Examining the Existential Risk of AI: Expert Perspectives on Whether Advanced Intelligence Threatens Humanity

    Oct 06, 2025 03:07 pm CST

AI pioneer Geoffrey Hinton estimates a 10-20% chance that artificial intelligence could lead to human extinction within 30 years, but expert opinion remains divided. In a survey of five specialists in the field, three disagreed with the assessment that AI poses an existential threat to humanity, offering detailed reasoning about the actual risks and capabilities of current and future AI systems.