Italy's G7 2024 Presidency, AI Safety and the Debate on Its Future

Joanna Davies
July 2, 2024

G7 leaders met in Apulia on June 13-15, 2024, hosted by Italy, whose G7 presidency lasts until December 31, 2024. The digital economy and artificial intelligence (AI) were high on the summit's agenda. Italian prime minister Giorgia Meloni invited the Pope to participate in the G7's session on AI, signalling the importance of this issue and building on the Holy See's Rome Call for AI Ethics of 2020. In January this year the Pope called for putting "systems of artificial intelligence at the service of a fully human connection." In the lead-up to the Apulia Summit, Meloni stated her expectation that the Pope's presence would contribute to an ethical framework for AI.

From 1975 to 2023, the G7 dedicated just 2% of its communiqués to digitalization (including information and communications technology at its earlier summits). According to the G7 Research Group, it made 229 collectively agreed commitments on the subject; of the 17 commitments assessed for compliance, members complied at an average of 85%. G7 leaders first addressed AI itself in a major way at their Charlevoix Summit in 2018, making 24 commitments on the issue. They are now increasing their attention to AI, releasing the G7 Leaders' Statement on the Hiroshima AI Process in October 2023, with the digital ministers following up in December with a comprehensive policy framework for it. Under Italy's presidency, in April 2024, the digital ministers reaffirmed this framework.

How can the G7 effectively address the major challenges of AI governance, not least AI safety? What can Italy as host contribute? And what can the G7 leaders do to improve their governance of AI beyond the 2024 presidency?

Challenges

In March 2023, an open letter signed by influential figures such as Elon Musk and Steve Wozniak called for a six-month moratorium on the development of AI systems more powerful than GPT-4. Eliezer Yudkowsky, an AI safety researcher and co-founder of the Machine Intelligence Research Institute in Berkeley, went further in a piece published in TIME magazine, arguing that a six-month pause would not be enough. These pleas came amid an intense AI arms race, and OpenAI's transition from a non-profit to a "capped-profit" structure had further intensified discussions about AI governance.

Yudkowsky outlined several concerns about artificial general intelligence (AGI). First, he highlighted the challenge of predicting the future accurately, emphasizing that successful futurism depends on the predictability of certain events. Second, he pointed out the complexity of defining and achieving "goals" in AI. Goals are not merely preferences but include fundamental conditions such as consent, autonomy and the communal interests of others. This complexity introduces a layer of unpredictability.

The intentionality-goal issue leads to a broader issue related to AGI: the efficiency of systems built on neural nets and gradient descent compared to human cognition. Yudkowsky discussed "epistemic efficiency," whereby humans cannot systematically predict the biases in an AI's estimates (much as one cannot systematically out-predict the stock market), and "instrumental efficiency," whereby humans cannot perceive better strategies relative to the AI's goals (much as one cannot see better moves than Stockfish in chess). Efficiency sits on a gradient: machine learning engineers build "neural nets" (loosely modelled on the brain), with different pathways across different layers that are trained through supervised and reinforcement learning and, in some cases, unsupervised learning on self-collected data. The concept of "smartness" in AI, often portrayed in science fiction, may not necessarily equate to making effective predictions and strategies. These discussions underscore the intricacies involved in defining and achieving decision-making criteria for machine learners: AGI relies on language games and related actions in reality. So, what are the criteria for "being good at making decisions" to a machine learner?
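
To make the training mechanics mentioned above concrete, here is a minimal sketch of gradient descent, the procedure by which a network's parameters are tuned. The one-parameter model, quadratic loss and learning rate below are illustrative assumptions for exposition, not features of any system discussed in this article:

    # A minimal gradient descent loop: repeatedly nudge a parameter w
    # downhill on a loss function until predictions fit the data.

    def loss(w, data):
        # Mean squared error of the one-parameter model y = w * x.
        return sum((w * x - y) ** 2 for x, y in data) / len(data)

    def gradient(w, data):
        # Derivative of the loss with respect to w.
        return sum(2 * x * (w * x - y) for x, y in data) / len(data)

    # Supervised training pairs (x, y) generated by the rule y = 3x.
    data = [(1, 3), (2, 6), (3, 9)]

    w = 0.0                 # arbitrary starting value
    learning_rate = 0.05
    for step in range(100):
        w -= learning_rate * gradient(w, data)   # step against the gradient

    print(round(w, 3))      # approaches 3.0, the rule behind the data

Real neural networks apply the same update simultaneously to millions or billions of parameters, which is part of what makes their internal decision criteria so hard to inspect.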

The potential threats posed by AGI are significant. During an April 2024 colloquium at New York University on techno-philosophy, Yudkowsky advocated for international cooperation to halt the AI arms race. He suggested that a minimum of three G20 members would need to agree to pause AGI development to manage the risks effectively. The United Kingdom took a step in this direction by hosting the first global AI Safety Summit at Bletchley Park in November 2023, where world leaders and major tech companies reached an initial agreement on AI's future. Following this inaugural meeting, South Korea co-hosted a smaller follow-up AI summit in Seoul in May 2024. It aimed to create guardrails for a rapidly advancing technology that raises concerns ranging from algorithmic bias that skews search results to potential existential threats to humanity. In March 2024, the United Nations General Assembly approved its first resolution on AI, lending support to an international effort to ensure the powerful new technology benefits all countries, respects human rights and is "safe, secure and trustworthy."

A study by the International Monetary Fund sparked further debate on AI's labour market impact, highlighting significant differences in AI access and readiness between advanced and developing economies. Routine jobs are at higher risk of being replaced by AI, while high-value jobs could benefit from increased productivity.

Italy's G7 Presidency and AI Initiatives

Since the G7's Hiroshima Summit in May 2023, and particularly during its G7 presidency, which began on January 1, 2024, Italy has taken several key AI safety initiatives:

  1. National AI Strategy and Governance: Italy established a national AI strategy governed by a committee of experts to ensure responsible AI development. This strategy supports research, experimentation and ethical AI production.
  2. Multilateral Forums and Cooperation: Italy has prioritized AI safety, organizing forums to address AI's risks and opportunities. Meloni has engaged with global leaders to emphasize international cooperation on AI governance.
  3. Ethical AI Development: Meloni has stressed the importance of developing AI ethically, focusing on human rights and needs, warning against AI supplanting human intelligence and advocating a human-centred approach.
  4. Investment in AI Technology: Italy has invested significantly in AI, including a €1 billion investment by the Cassa Depositi e Prestiti's Venture Capital group to enhance competitiveness and resilience.
  5. Promotion of STEM Education: Meloni has emphasized improving education in technical and scientific disciplines to prepare for future AI developments and ensure inclusive innovation.
  6. Focus on AI's Social Impact: Italy has recognized AI's social impact, particularly on elder care. Initiatives have aimed to explore AI solutions to support caregivers, focusing on privacy, security and age-friendly environments.
  7. Global Regulatory Framework: Italy has advocated for a global regulatory framework for AI to ensure equitable opportunities and mitigate risks, highlighting the importance of global cooperation in addressing AI-related challenges.

Meloni's government, described as the "most right-wing" in Italy since World War II, has taken a conservative yet proactive stance on AI safety and tech governance. In April 2024, it approved legislation punishing the harmful distribution of AI-generated content with up to five years in prison. Italy has also led efforts to create a global framework to protect the workforce, including hosting a symposium at the Italian embassy in Washington DC titled "AI and Human Capital," focusing on AI's impact on the labour market.

Meloni's cautious approach reflects the concerns of Italian academics about the societal and philosophical implications of technological advancement.

Contrasting Views on AI's Future

Unlike Yudkowsky, Scott Aaronson, a computer scientist who has worked on AI safety at OpenAI, is less concerned about AI's potential dangers. He believes AI will transform civilization through tools and services that cannot plot to annihilate humanity, much as Windows 11 or the Google search bar operate. However, Aaronson acknowledges a high probability of existential catastrophe in the coming century due to various factors including AI but also, notably, climate change and nuclear war. He expects AI to be intricately woven into all aspects of human civilization, affecting political processes and societal functions, as evidenced by AI's influence on the 2016 US election through the Facebook recommendation algorithm.

In conclusion, the debate on AI safety continues to evolve, with varying perspectives on its risks and benefits. Yudkowsky calls for precautionary measures and international cooperation, while others such as Aaronson view AI's integration into society as inevitable and potentially beneficial. The global community, led by initiatives from countries including Italy, is recognizing these challenges through strategic investments, ethical considerations and regulatory frameworks.

To address these challenges further, G7 leaders – under Italy's 2024 presidency and looking ahead to the 2025 summit that Canada will host in Kananaskis – should take the following actions.

  1. Create an AI safety and security board. Most of the world's leading tech companies are based in the United States; on April 26, 2024, the US government released a press statement titled "Over 20 Technology and Critical Infrastructure Executives, Civil Rights Leaders, Academics, and Policymakers Join New DHS Artificial Intelligence Safety and Security Board to Advance AI's Responsible Development and Deployment." Among them were tech leaders including the CEOs of OpenAI, Microsoft, NVIDIA and IBM. Security leaders encouraged the board's establishment primarily on account of AI's cybersecurity threats to the US Treasury. The G7 should focus on establishing fair and equitable AI globally through knowledge transfer, cross-border data flow, and the improvement and support of broadband access. The board should undertake several activities, including:
    1. Produce a comprehensive definition of AI. In the financial sector, there is a lack of consistency in defining what AI is. Companies, moreover, are using AI defensively. There is also little understanding of how the increasing adoption of AI poses threats to the labour market or creates opportunities. The G7 should therefore work to create clarity for organizations, regulators and clients: it should create and adopt a common AI lexicon as part of its AI safety and security board. It should also maintain a consistent sectoral and cross-sectoral view of AI's (limited) utility.
    2. Improve access to AI for small businesses, which are most at risk of cybersecurity threats due to a lack of expertise.
    3. Ensure developing countries are not left out of the AI race: AI governance in the Global South poses challenges, especially since broadband access needs improvement in large parts of the world. The internet penetration rate in Africa is 43%, and only 14% of households in Mauritania have internet access. The Brookings Institution has said that "governments in the Global South must invest in local researchers and integrate digital skills training into education curricula, and countries that have dominated the AI conversation must include Global South countries in roundtables and advisory bodies."
  2. Create a data network that both allows and requires AI tech companies to access information from around the world. Many countries, and many people within countries, are not represented in the data used to train AI, creating harmful algorithmic biases. In a world where technological corporations hold unprecedented power and influence, technology developed with Western perspectives, values and interests is imported to regions such as Africa with little regulation or critical scrutiny. This has been described as "algorithmic colonization." The G7 should take measures or create policy that prevents the development and procurement of biased AI in the public domain.
  3. Encourage and create transparency among AI companies, as well as AI's democratization. OpenAI, for example, should work in the interest of all: AI companies creating generative AI should not be able to privatize knowledge commons – hide public information behind a paywall – or fudge their data to mislead regulators.
  4. Ensure AI is not used to exploit privacy or to manipulate. Companies have been able to access public information for profit: BloombergGPT, for example, is Bloomberg's 50-billion parameter large language model, purpose-built from scratch for finance. Corporations may offer services that are expensive to develop but enable them to dominate entire industries and markets. For instance, trading strategies based on financial data or risk models created from healthcare data have become so valuable to finance and insurance companies that collaborating with major AI models appears to be the only viable way to remain competitive. This type of buy-out, synergy or strategy results in the concentration of knowledge, power and/or capital in only a few corporations. The G7 should seek to have countries engage in honest, human-centred business when using AI. It should ensure that legal frameworks that protect the public and public goods are not undermined or evaded: antitrust laws in North America, data protection laws and AI regulation in the European Union, and tax regimes and copyright laws everywhere are still under-developed and unsophisticated.
  5. Avoid epistemic protectionism: the widespread availability of AI to perform most tasks might be a problem for cognitive development. As AI becomes more widespread, there is less need to justify performing a task using tools such as AI, even a creative task. Humans have a tool bias, especially as it relates to saving time (the perceived efficiency of a given tool), which means that they are likely to use the tool instead of performing cognitive processes internally (such as doing a mathematical calculation). The lure and ubiquity of "efficient" AI, particularly generative AI, might force people to "concede" parts of their cognitive processes to be carried out by AI. This might affect, inter alia, a person's cognitive development and adaptability or flexibility in low-tech contexts. Furthermore, the mere existence of AI can affect dispositional beliefs about one's own abilities, as well as one's ability to reflect on one's own actions (metacognition). There is a long-standing, widespread debate on the attention economy and the digital world, as well as on the utility of the internet for the betterment of all aspects of human life; the benefits have been found to outweigh the costs. For the most part, users are not aware of the health effects of their daily use of technological objects, particularly AI. The use of AI or supercomputers is, in and of itself, a public health issue that should be examined on a clinical level. Public health concerns should be addressed, especially regarding children's access to tools that allow them to circumvent or supplant the development of certain skills in favour of the skills needed to benefit from highly advanced tools. The G7 should encourage funding national and international research into the health effects of widespread AI and supercomputers (such as smartphones) in order to regulate their personal use, as well as their sectoral use. This type of clinical time-series study should include understanding the current use of AI and supercomputers in developing countries.
