G7 Research Group


The G7's Response to AI Governance: A Rapid Reaction

Jessica Rapson, G7 Research Group
June 14, 2024

Artificial intelligence (AI) governance has swiftly risen to the forefront of global policy discussions, prompting an accelerated and unified response from the G7 leaders. This issue, which seemingly emerged overnight, was first prominently featured on the G7 agenda at the Hiroshima Summit last year.

In December 2023, G7 leaders endorsed the Hiroshima AI Process, a significant milestone as the first successful international framework for governing advanced AI. The framework specifically targets advanced foundation models and generative AI, emphasizing that organizations should not develop or deploy AI systems in ways that undermine democratic values, harm individuals or communities, facilitate terrorism, enable criminal misuse or pose substantial risks to safety, security and human rights. Despite these strong statements, the framework's binding language is relatively weak, with no specific commitments to fund AI safety research, implement security measures or leverage AI to address global challenges. The framework suggests implementing security controls but does not specify what those controls should be.

The most concrete aspects of the Hiroshima AI Process include transparency reporting, where organizations are expected to disclose their AI governance policies, and content authentication measures, such as using watermarks on AI-generated content. However, much remains to be done to solidify these commitments into actionable and enforceable policies.

What to Expect on AI Governance from the Apulia Summit?

As the G7 transitions to the Apulia Summit, the focus on AI governance is expected to continue. Founded in 1975 as a primarily economic institution, the G7 has historically prioritized trade and economic competitiveness. Therefore, it is likely that the Apulia Summit will emphasize ensuring the dominance of G7 economies in AI-related innovation, particularly in the face of rising competition from China.

One area of focus will likely be the diversification of integrated circuit supply chains, given that much of the world's chip production currently takes place in Taiwan and South Korea. Alongside this, there will be pressure to restrict Chinese competitiveness in the AI sector. Although economic priorities are expected to dominate, possibly crowding out national security concerns, integrating safety considerations into these economic discussions will be crucial.

Why Is the Pope Speaking About AI at the Apulia Summit?

For the first time ever, the Pope will participate in a G7 summit, addressing the implications of artificial intelligence for Catholic doctrine. This unprecedented involvement highlights the significance of AI beyond the economic and political realms, touching on moral and ethical considerations. Pope Francis has previously expressed concerns about the role of machines in society, stating in his World Communications Day message that "wisdom cannot be sought from machines." His participation could signal the Catholic Church's broader unease with the concept of general AI, particularly the use of the term "intelligence" to describe machines, in line with his views on the moral and philosophical implications of AI.

Challenges in AI Governance for the G7

The governance of AI presents several complex challenges for the G7:

  1. Cross-sectoral impact: AI governance intersects with numerous sectors and industries, including employment, national security, intellectual property and the environment.

  2. Pace of innovation: The rapid pace of AI innovation is likely to continue outstripping the speed at which regulatory responses can be developed and implemented by G7 members.

Steps G7 Leaders Can Take to Ensure the Success of Apulia Summit Commitments

To enhance the effectiveness of AI governance commitments made at the Apulia Summit, G7 leaders can take several strategic steps:

  1. Refer to specific international organizations and regulatory bodies: Naming organizations such as the Global Partnership on Artificial Intelligence, the International Telecommunication Union and relevant national bodies capable of implementing the policies discussed in the Hiroshima AI Process will provide a clear pathway for action.

  2. Make specific monetary commitments: Committing dedicated funds to AI safety research, security implementation and the use of AI to address global challenges will ensure that the frameworks established are not merely aspirational but backed by tangible resources.

By addressing these considerations, the G7 can build on the momentum of the Hiroshima AI Process and pave the way for robust and effective AI governance that balances innovation with ethical and safety concerns.


This Information System is provided by the Global Governance Program,
which includes the G20 Research Group and G7 Research Group
based at the University of Toronto.
   
Please send comments to:
g7@utoronto.ca
g20@utoronto.ca
This page was last updated June 17, 2024.