OpenAI Gears Up for Upcoming Elections: A Hiring Surge for Elections and Public Policy Staff


There is no doubt that artificial intelligence will be a crucial ingredient in the upcoming U.S. elections, much as social media was in earlier cycles. A clear parallel is Blue State Digital, the digital advertising agency founded in 2004 by former members of Howard Dean’s 2004 presidential campaign; the firm went on to play a significant role in Barack Obama’s 2008 and 2012 presidential campaigns.

The use of social media revolutionized how candidates communicate with voters, but it also created a massive challenge in terms of consistency of messaging. Another side effect of social media and political marketing is fake news. This phenomenon is not particular to the United States; the Council of Europe, for example, states that propaganda, misinformation, and fake news have the potential to polarize public opinion, promote violent extremism and hate speech, and, ultimately, undermine democracies and reduce trust in democratic processes. Fake news is considered one of the biggest challenges facing many modern democracies. Eurobarometer studies suggest that two-thirds of E.U. citizens report coming across fake news at least once a week, and half of E.U. citizens aged 15-30 say they need critical thinking and information skills to help them combat fake news and extremism in society. If this was the case with social media alone, imagine the effect of AI on upcoming elections.

AI in upcoming elections and fake news

If social media was the catalyst for distributing fake news, artificial intelligence (AI) is the engine for creating it. The speed at which content can be produced with AI is incomparable to anything seen before, and there is an understandable concern that the technology could be used to propagate fake images and articles. This was already evident in the fabricated images of Pope Francis and of President Donald Trump’s legal proceedings that circulated widely.

OpenAI clearly understands this challenge. In an interview with ABC News, Sam Altman, the company’s CEO, said, “I’m particularly worried that these models could be used for large-scale disinformation. Now that they’re getting better at writing computer code, [they] could be used for offensive cyberattacks.” Could organizations use artificial intelligence language models such as ChatGPT to induce voters to behave in specific ways? Sen. Josh Hawley asked Altman this question in a U.S. Senate hearing on artificial intelligence. Altman replied that he was concerned that some people might use language models to manipulate, persuade, and engage in one-on-one interactions with voters.

The company has recently started hiring congressional liaisons, probably due to its agreement with the Biden Administration to address the promise and potential risks of artificial intelligence. Seven leading AI companies – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – have voluntarily committed to adopting stringent safety, security, and transparency measures to ensure the responsible development of AI technology. The White House announced this significant milestone as part of its broader commitment to protect Americans from harm and discrimination in the rapidly evolving world of AI.

OpenAI has published a job posting for an Elections Lead on its Public Policy team. According to the post, the new hire would play a crucial role in upholding the security and integrity of democratic elections by shaping usage policies for OpenAI’s advanced AI tools and promoting industry best practices and regulations. The role would involve:

  • Working with in-house and external experts.
  • Participating in stakeholder engagement and problem-solving activities.
  • Supporting OpenAI leadership in various meetings related to both U.S. and international elections.

According to OpenAI’s posting, the role requires handling new challenges arising from the use of AI technologies in democratic processes and the evolving needs of the organization. A fascinating insight from the posting is its mention of international elections: the company is clearly aware of how this technology could be used to sway democracy outside the United States. The hire would need a “sophisticated understanding of AI-related regulatory issues and processes, as well as the foundational principles and practices of democratic elections in the United States and globally.”

The focus on bringing in talent that is well versed in emerging AI technologies, particularly as they apply to democratic politics and elections, is an eye-opener as to what the company expects in the coming months.

AI & U.S. elections

Lt. Gen. Timothy Haugh, chosen by President Joe Biden to lead the NSA and Cyber Command, has warned that generative artificial intelligence technologies could pose significant threats to the upcoming U.S. presidential election. This comes at a time when House and Senate lawmakers are grappling with how to regulate and monitor these new AI technologies, especially following several attempts by foreign hackers to interfere in recent U.S. elections.

In his Senate Armed Services Committee nomination hearing, Haugh highlighted the potential misuse of generative AI in foreign interference attempts. The NSA and Cyber Command have been instrumental in detecting and thwarting threats to U.S. elections. For instance, during the 2018 midterms, Cyber Command successfully blocked a key Russian troll farm from spreading disinformation.

The advent of AI technologies has brought forth a myriad of challenges. Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency, has recently voiced her apprehension about the “epoch-defining” risks that come with AI, including the potential for increased online disinformation. These risks pose significant threats to the integrity of our online environment, and it is imperative that we take proactive measures to mitigate them.

The future of political messaging

AI in upcoming elections could determine the future of political marketing. As Artificial Intelligence advances, its influence on democratic processes, especially elections, is becoming increasingly significant. This is both a promising and concerning development. AI can efficiently communicate with voters, facilitate political marketing, and analyze public sentiment. However, it can also become a potent vehicle for disinformation or fake news, a challenge already evident in the era of social media.

OpenAI recognizes these challenges and opportunities. It is gearing up to responsibly navigate this evolving landscape, as evidenced by its recruitment drive for policy staff and its voluntary commitments to stringent safety, security, and transparency measures alongside leading AI companies. The company is not only focusing on the potential impact of AI on U.S. elections but is also considering its potential effects on democratic processes globally. 

While lawmakers and security agencies grapple with the implications of AI in upcoming elections, companies like OpenAI have a crucial role to play. Their proactive steps toward addressing AI’s possible threats and fostering its benefits are integral to protecting democratic processes in this era of rapidly evolving AI technologies. In the face of these new challenges, the balance between innovation and regulation will be critical in shaping the future of democracy.