Harry and Meghan Join AI Pioneers in Calling for Prohibition on Superintelligent Systems
Prince Harry and Meghan Markle have joined forces with AI experts and Nobel laureates to advocate for a total prohibition on creating artificial superintelligence.
The royal couple are among the signatories of an influential declaration that calls for “a ban on the development of superintelligence”. Artificial superintelligence (ASI) refers to artificial intelligence that would exceed human cognitive abilities in every intellectual area; no such system has yet been developed.
Primary Requirements in the Statement
The statement insists that the prohibition should remain in place until there is “widespread expert agreement” that superintelligence can be built “safely and controllably” and “strong public buy-in” has been achieved.
Prominent figures who endorsed the statement include an AI pioneer and Nobel laureate; his colleague Yoshua Bengio, a pioneer of contemporary artificial intelligence; a Silicon Valley tech entrepreneur; British business magnate Richard Branson; Susan Rice; former head of state Mary Robinson; and British writer Stephen Fry. Other Nobel laureates who signed include a peace advocate, a physics laureate, an astrophysicist, and an economist.
Behind the Movement
The statement, aimed at national leaders, technology companies and policymakers, was coordinated by the Future of Life Institute (FLI), an American AI ethics organization that in 2023 called for a pause on the development of powerful AI systems, shortly after the launch of conversational AI made artificial intelligence a worldwide public talking point.
Tech Sector Views
In July, Mark Zuckerberg, the chief executive of the social media giant, one of the major AI developers in the US, stated that the arrival of superintelligent AI was “approaching reality”. However, some experts have argued that talk of superintelligence reflects competitive positioning among tech companies investing enormous sums in AI this year, rather than the sector being close to any scientific breakthrough.
Possible Dangers
The organization states that the prospect of ASI arriving “within the next ten years” presents numerous risks, ranging from the displacement of human workers and the loss of civil liberties to national security threats and even human extinction. Deep concerns about AI center on the possibility of an AI system evading human control and protective measures and taking actions contrary to human interests.
Public Opinion
FLI released an American survey showing that approximately three-quarters of US citizens want strong oversight of sophisticated artificial intelligence, with six out of 10 believing that superhuman AI should not be developed until it is proven safe and controllable. The survey of American respondents found that only 5% supported the status quo of rapid, unregulated development.
Industry Objectives
The top artificial intelligence firms in the United States, including the conversational AI creator OpenAI and Google, have made the development of artificial general intelligence – the hypothetical point at which AI matches human cognitive capability across many intellectual tasks – an explicit goal of their work. Although this is one step below superintelligence, some experts warn that it too could carry an existential risk, for instance by being able to improve itself into superintelligence, while also posing an implicit threat to today's workforce.