

Elon Musk, along with several AI researchers, has called for a pause on AI development. (Credit: Brian Penny from Pixabay)

Tesla and SpaceX chief executive Elon Musk, along with several artificial intelligence (AI) researchers and other prominent industry executives, has signed an open letter calling for an immediate pause of at least six months on the training of AI systems more powerful than GPT-4. 

According to the signatories, the temporary halt should involve all key players and be public and verifiable. 

They have also urged governments to step in and impose a moratorium if such a halt cannot be implemented immediately. 

During this pause, the letter urges AI labs and independent experts to jointly develop and implement a set of shared safety protocols for advanced AI design and development. 

These protocols should be rigorously audited and overseen by independent outside experts, the signatories stated. 

The protocols are expected to ensure that systems adhering to them are safe beyond a reasonable doubt. 

The open letter reads: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?  

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?  

“Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.  

“This confidence must be well justified and increase with the magnitude of a system’s potential effects.” 

The signatories also stated that AI developers must work with policymakers to significantly accelerate the development of robust AI governance systems. 

These governance systems should include new and capable regulatory authorities dedicated to AI, as well as provenance and watermarking systems to help distinguish real content from synthetic and to track model leaks. They should also provide oversight and tracking of highly capable AI systems and large pools of computational capability, said the signatories.