Elon Musk was once at the forefront of artificial intelligence: Back in 2015, he invested $50 million in OpenAI, the company that eventually developed ChatGPT. The Tesla CEO, however, believes that AI has gotten out of hand and now poses an "existential risk." The billionaire tech mogul painted a dark picture while discussing new AI technologies at the U.K.'s AI Safety Summit, arranged by Prime Minister Rishi Sunak.
Over the past year, Elon has become more and more critical of AI, and in March 2023, he even called for a pause in the creation of giant AI "digital minds."
Click through (if you dare) to see what he's saying about AI now…
"I think [artificial intelligence] is one of the biggest threats [to humans]," Elon Musk said at the U.K.'s AI Safety Summit. "We have for the first time the situation where we have something that is going to be far smarter than the smartest human. We're not stronger or faster than other creatures, but we are more intelligent, and here we are for the first time, really in human history, with something that is going to be far more intelligent than us."
The problem, Elon says, is that the AI train might be too far down the tracks.
Despite Elon Musk's grim outlook on artificial intelligence, he's not giving up on reining in AI.
"It's not clear to me if we can control such a thing, but I think we can aspire to guide it in a direction that's beneficial to humanity," he said. "But I do think it's one of the existential risks that we face and it is potentially the most pressing one if you look at the timescale and rate of advancement."
Elon argues that there needs to be a "third-party referee" to monitor the technology, and he hopes there will be "international consensus" around that idea.
Elon Musk told ITV News the summit was a "step in the right direction" and that fearing AI "a little bit" is "probably wise."
"My personal opinion is that AI is 80% likely to be beneficial, that last 20% dangerous," he said. "This is obviously speculative at this point, but if we hope for the best and prepare for the worst, that seems like the wise course of action."
"The very worst could be extremely bad but I think the probability of extremely bad is low… Any powerful new technology is inherently a double-edged sword, so we just want to make sure the good edge is sharper than the bad edge," he added.
In April 2023, Elon Musk said artificial intelligence was a "danger to the public."
"I think we'll have a better chance of advanced AI being beneficial to humanity in that circumstance," he told Fox News. "AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production. In the sense that it has the potential — however small one may regard that probability — but it is non-trivial and has the potential of civilization destruction."
"I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe," he added.