Newslooks/ San Francisco/ J. Mansour/ Morning Edition/ The Biden administration will host an international AI safety meeting in San Francisco on November 20-21, 2024. Government scientists and AI experts from nine countries and the European Union will collaborate on managing AI risks and advancing safety standards. This gathering follows previous summits in the UK and South Korea, marking a step toward global AI cooperation.
International AI Safety Summit: Quick Looks
- Meeting Date: Scheduled for November 20-21, 2024, in San Francisco, with global AI experts attending.
- Focus: Collaboration on safe AI development, managing synthetic content, and addressing malicious uses of AI.
- Countries Involved: The U.S., UK, Canada, France, Japan, Australia, and South Korea, among other partner nations, along with the European Union.
- Previous Summits: Follows earlier meetings in the UK and South Korea to build a global AI safety network.
- Broader Summit: Leads up to a larger AI summit planned for February 2025 in Paris.
Biden Administration Plans International AI Safety Meeting in November
Deep Look:
The Biden administration is preparing to host a significant international summit on AI safety in San Francisco, set for November 20-21, 2024. The gathering will include AI experts and government representatives from nine countries, along with the European Union, all focused on coordinating efforts to safely develop artificial intelligence technology while mitigating its potential dangers.
This two-day summit will follow the U.S. elections in November and is seen as a continuation of the global dialogue initiated at the AI Safety Summit in the United Kingdom in 2023. That summit produced pledges from world leaders to work together to address the risks posed by rapid advances in artificial intelligence, particularly its use in malicious or harmful ways.
U.S. Commerce Secretary Gina Raimondo emphasized the importance of this meeting, calling it the “first get-down-to-work meeting” following earlier discussions in the UK and a subsequent May summit in South Korea. She explained that the meeting will focus on setting global standards to address key risks associated with AI-generated content and other harmful applications of AI technology. “We’re going to think about how to work with countries to set standards as it relates to the risks of synthetic content, the risks of AI being used maliciously by bad actors,” Raimondo said.
San Francisco, a global hub for AI innovation and home to leading companies like OpenAI, was chosen as the site for this critical meeting. Its proximity to the developers driving much of the recent progress in AI makes it a fitting location for technical discussions on AI safety measures. The meeting will occur two weeks after the U.S. presidential election, in which Vice President Kamala Harris, who played a key role in shaping the U.S. government’s AI policies, faces off against former President Donald Trump, who has expressed opposition to Biden’s AI strategies.
Raimondo, alongside Secretary of State Antony Blinken, will co-host the San Francisco summit, tapping into a growing network of AI safety institutes formed in the U.S., UK, and other countries such as Australia, Japan, and France. The 27-member European Union will also be part of the talks, although the summit’s most notable absentee will be China, which has yet to join these international AI safety discussions. Raimondo hinted that discussions are still ongoing regarding additional participants.
AI regulation has become a key issue worldwide as governments scramble to ensure the technology is developed and deployed safely. Different countries have taken varying approaches, with the European Union leading the way with the world’s first comprehensive AI law that imposes restrictions on the most dangerous uses of the technology. Meanwhile, President Biden signed an executive order on AI in October 2023 that requires developers of the most advanced AI systems to share safety test results and other information with the U.S. government before release.
The upcoming AI summit is also expected to tackle the growing challenge of AI-generated deepfakes, especially as political tensions rise before major elections. The potential for AI to be used in spreading misinformation or influencing elections is a concern shared by many governments, including the U.S. and the European Union. California Governor Gavin Newsom recently signed bills aimed at combating political deepfakes ahead of the 2024 election, underscoring the urgency of addressing AI’s role in political interference.
One key company in this conversation is OpenAI, headquartered in San Francisco, which developed the widely used AI chatbot ChatGPT. OpenAI has been actively collaborating with AI safety institutes, granting safety researchers in the U.S. and UK early access to its latest AI model, o1. The new model goes beyond previous capabilities, performing complex reasoning and producing long chains of internal thought before answering questions. OpenAI classified o1 as posing a “medium risk” in the category of weapons of mass destruction, highlighting the ongoing need for AI safety oversight.
While AI companies like OpenAI generally agree on the need for regulation, they have also voiced concerns that overly restrictive policies could stifle innovation. Despite these tensions, the Biden administration has continued to push for voluntary commitments from AI developers to test and evaluate their models’ safety before public release. Raimondo acknowledged that while voluntary measures are a step in the right direction, mandatory regulations may be necessary in the future, and she called on Congress to take action to formalize AI safety standards.
The San Francisco AI safety summit in November will serve as a pivotal moment for international cooperation on AI regulation, laying the groundwork for a broader global summit set to take place in February 2025 in Paris. As the world grapples with the dual challenges of reaping the benefits of AI while managing its risks, the summit will be a key step in ensuring that AI development proceeds safely and responsibly.