The Future of Life Institute has announced it will use a $10m (£6m) donation from billionaire entrepreneur Elon Musk to fund 37 research projects dedicated to keeping AI “beneficial”.
The projects include $136,000 for a study of artificial intelligence weapons and how to keep them under “meaningful human control”.
There is also a further $1.5m earmarked for an AI research centre.
Additional funding has come from the Open Philanthropy Project.
The centre would be run by Oxford and Cambridge universities in the UK.
“There are reasons to believe that unregulated and unconstrained development could incur significant dangers, both from ‘bad actors’ like irresponsible governments and from the unprecedented capability of the technology itself,” said Oxford University’s Nick Bostrom.
“The centre will focus explicitly on the long-term impacts of AI, the strategic implications of powerful AI systems as they come to exceed human capabilities in most domains of interest, and the policy responses that could best be used to mitigate the potential risks of this technology.”
The projects now set to receive grants from the Future of Life Institute (FLI) include studies on how ethics and human values can be incorporated into AI work.
The group said it had received nearly 300 funding applications from around the world.
Microsoft co-founder Bill Gates and Prof Stephen Hawking are among the high-profile figures who have warned about the potential dangers of AI as machines become increasingly intelligent and less dependent on human control.
FLI president Max Tegmark said the organisation was not concerned by the nightmare scenarios depicted in Hollywood films such as Terminator.
“The danger with the Terminator scenario isn’t that it will happen, but that it distracts from the real issues posed by future AI”, he said.
“We’re staying focused, and the 37 teams supported by today’s grants should help solve such real issues.”
Elon Musk, who founded SpaceX and co-founded Tesla Motors and PayPal, donated to the FLI in January this year.
“Here are all these leading AI researchers saying that AI safety is important”, he said.
“I agree with them, so I’m committing $10m to support research aimed at keeping AI beneficial for humanity.”