Musk on AI safety in 2019, before xAI

MUSK: I think there is a lot—a tremendous amount of investment—going on in AI. Where there’s a lack of investment is in AI safety.

And there should be, in my view, a government agency that oversees anything related to AI to confirm that it does not represent a public safety risk, just as there are regulatory authorities in other areas: the Food and Drug Administration for food and drugs, NHTSA for automotive safety, the FAA for aircraft safety. The general conclusion in those areas is that it is important to have a government referee, a referee serving the public interest, making sure things are safe when there is a potential danger to the public.

I would argue that AI is unequivocally something that has the potential to be dangerous to the public, and therefore should have a regulatory agency, just as other things that are dangerous to the public have a regulatory agency. But let me tell you the problem with this: the government moves very slowly. Usually, the way a regulatory agency comes into being is that something terrible happens, there’s a huge public outcry, and years after that, a regulatory agency or a rule is put in place.

Take something like seatbelts. It was known for a decade or more that seatbelts would have a massive impact on safety, saving many lives and preventing many serious injuries, and the car industry fought seatbelt requirements tooth and nail. That’s crazy. And hundreds of thousands of people probably died because of that. They said people wouldn’t buy cars if they had seatbelts, which is obviously absurd, you know?

Or look at the tobacco industry and how long they fought any acknowledgment that smoking was bad for you. That’s part of why I helped make that movie, Thank You for Smoking. You can sort of see just how pernicious it can be when these companies effectively achieve regulatory capture of government.

People in the AGI community refer to the advent of digital superintelligence as “the singularity.” That is not to say that it is good or bad, but that it is very difficult to predict what will happen after that point. There’s some probability it will be bad, some probability it will be good. But obviously, you want to shift that probability so the outcome is more good than bad.
