In Big Election Year, A.I.’s Architects Move Against Its Misuse
In the lead-up to a year of major elections worldwide, artificial intelligence (AI) companies have taken steps to prevent the misuse of their technology in the electoral process. In January 2024, OpenAI, the developer of ChatGPT, announced measures to prevent the use of its tools to create chatbots that impersonate real people or institutions. Google and Meta have likewise moved to limit their AI chatbots' responses to election-related prompts and to better label AI-generated content on their platforms.
On February 16, 2024, 20 tech companies, including Adobe, Amazon, Anthropic, Google, Meta, Microsoft, OpenAI, TikTok, and X, signed a voluntary pledge to combat deceptive AI content in elections. The accord, announced at the Munich Security Conference, commits the signatories to collaborate on AI detection tools and other measures, but it stops short of calling for a ban on election-related AI content. These efforts are part of a broader push by AI companies to address the risks AI poses to elections, as concern grows that AI-generated content could deceive voters.