OpenAI, the creator of ChatGPT, reported that it acted within 24 hours to disrupt deceptive uses of AI in covert operations targeting the Indian elections, and said the campaign achieved no significant increase in audience. According to a report on its website, OpenAI identified STOIC, an Israeli political campaign management firm, as having generated content about both the Indian elections and the Gaza conflict.
Minister of State for Electronics & Information Technology Rajeev Chandrasekhar commented on the report, asserting that BJP4India was the target of the influence operation. OpenAI clarified that it had disrupted STOIC's activity, not the company itself. In May, the network began producing comments critical of the ruling BJP and supportive of the opposition Congress party; OpenAI said it disrupted these activities within 24 hours of their start. It banned a cluster of Israel-based accounts that were creating and editing content for an influence operation spanning X, Facebook, Instagram, websites, and YouTube. The operation initially targeted audiences in Canada, the United States, and Israel, then shifted in early May to Indian audiences with English-language content.
A report indicated that 82% of Indians opposed the use of generative AI in election campaigns. Chandrasekhar emphasized the threat such operations pose to democracy and called for thorough scrutiny and investigation. He also criticized the timing of OpenAI's disclosure, arguing it should have come earlier.
OpenAI affirmed its commitment to developing safe and beneficial AI, describing its investigations into covert influence operations (IO) as part of a broader strategy for safe AI deployment. The company said it enforces policies to prevent abuse and to improve transparency around AI-generated content, particularly by detecting and disrupting covert IO.
OpenAI disclosed that over the previous three months it had disrupted five covert IOs that used its models for deceptive activities. As of May 2024, none of these campaigns had meaningfully increased audience engagement or reach through OpenAI's services. The company codenamed the STOIC operation "Zero Zeno"; the operation used OpenAI's models to generate articles and comments that were posted across platforms including Instagram, Facebook, X, and associated websites.
The content produced by these operations covered a wide range of topics, including Russia's invasion of Ukraine, the Gaza conflict, the Indian elections, European and U.S. politics, and criticism of the Chinese government. OpenAI said it takes a multi-pronged approach to combating platform abuse: monitoring and disrupting threat actors, investing in technology and teams, and collaborating across the AI ecosystem to highlight and address potential misuses of AI.