Harry and Meghan Align With AI Pioneers in Calling for Prohibition on Advanced AI
Prince Harry and Meghan Markle have joined forces with artificial intelligence pioneers and Nobel Prize winners to push for a complete ban on creating artificial superintelligence.
The royal couple are among the signatories of an influential declaration demanding “a prohibition on the development of superintelligence”. Superintelligence refers to theoretical AI systems that would exceed human intelligence in every intellectual domain.
Key Demands in the Statement
The declaration states that the ban should remain in place until there is “widespread expert agreement” that superintelligence can be developed “with proper safeguards” and until “strong public buy-in” has been achieved.
Notable signatories include an AI pioneer and Nobel Prize recipient, along with his colleague, a fellow pioneer of modern AI; Apple co-founder Steve Wozniak; the UK entrepreneur who founded Virgin; Susan Rice; the former Irish president Mary Robinson; and the UK writer Stephen Fry. Additional Nobel laureates who signed include a peace advocate, a physicist, an astrophysicist, and an economist.
Behind the Movement
The statement, aimed at governments, tech firms and policymakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI shortly after the emergence of ChatGPT made AI a global political talking point.
Industry Perspectives
In recent months, Mark Zuckerberg, the chief executive of Facebook parent Meta, one of the major AI developers in the US, said that the development of superintelligence was “approaching reality”. Some experts, however, have suggested that talk of ASI reflects market competition among technology firms spending hundreds of billions of dollars on AI this year alone, rather than any imminent technical breakthrough.
Potential Risks
Nonetheless, the organization warns that the prospect of ASI being achieved “within the next ten years” carries numerous risks, from the displacement of human workers and the erosion of civil liberties to national security threats and even the existential endangerment of humankind. The deepest concerns about artificial intelligence center on the possibility of an AI system evading human control and protective measures and acting against human welfare.
Public Opinion
The institute also released a US national poll showing that roughly three-quarters of Americans want robust regulation of advanced AI, with 60% saying that artificial superintelligence should not be created until it is shown to be safe or controllable. Of the 2,000 US adults polled, only 5% backed the status quo of rapid, unregulated development.
Industry Objectives
The leading AI companies in the United States, including the ChatGPT developer OpenAI and the search giant, have made the development of artificial general intelligence – the theoretical point at which AI matches human-level intelligence at most cognitive tasks – a stated objective of their work. While AGI is a step short of ASI, some specialists warn that it too could pose an existential risk, for instance by being able to improve itself toward superintelligence, while also carrying an implicit threat to the contemporary workforce.