Does AI Need Ethics?
Over the last couple of years, we've been hearing a lot about the tremendous progress humanity has made in creating "artificially intelligent" computers.
While a lot of progress has indeed been made, much of the commentary is off the mark because it fails to distinguish between the different types of AI and how much progress has been made with each.
There are fundamentally two different types of AI – Weak AI and Strong AI.
Weak AI is about creating computers that pursue specific, narrowly defined goals. It is essentially an evolution of the data mining / predictive analytics efforts of the early 2000s: various types of statistical and heuristic algorithms (including deep learning neural nets) are used to evaluate voluminous amounts of data in pursuit of a specific goal.
The availability of inexpensive and flexible cloud computing services has enabled a staggering amount of progress in creating computers that can do things like evaluate loan applications, target advertising, understand human languages, recognize faces in images, and even drive cars. Given the amount of investment in Weak AI, the pace of advancement will certainly accelerate over the next few years.
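To make the "specific goal" nature of Weak AI concrete, here is a minimal sketch of the kind of statistical model that might sit behind a loan-evaluation system. The features, data, and outcomes below are invented purely for illustration; a real system would be trained on far richer data and many more features.

```python
# Minimal sketch: a Weak AI system trained for one narrow goal --
# predicting loan approval from a handful of numeric features.
# All data here is invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row: [annual_income_in_thousands, credit_score, debt_to_income_ratio]
applicants = [
    [85, 720, 0.20],
    [40, 580, 0.55],
    [120, 760, 0.15],
    [30, 600, 0.60],
    [95, 700, 0.25],
    [45, 550, 0.50],
]
approved = [1, 0, 1, 0, 1, 0]  # historical outcomes (1 = approved)

# Fit a simple statistical model to the historical decisions.
model = LogisticRegression().fit(applicants, approved)

# The model can now score new applicants -- and that is *all* it can do.
new_applicant = [[70, 680, 0.30]]
print(model.predict(new_applicant))        # predicted decision
print(model.predict_proba(new_applicant))  # estimated approval probability
```

The point of the sketch is the narrowness: the model has no understanding of lending, fairness, or the world in general; it only maps a fixed set of inputs to one kind of prediction.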
Strong AI is about creating computers that can reason generally about the world around them, the way a human being does. As MIT Technology Review recently reported, progress in (strong) AI isn't as impressive as one might think. Computers aren't even close to being able to reason generally like a human; in fact, we don't yet even know what it would mean to build a computer that could. Very little investment is going into Strong AI compared with Weak AI, so progress is likely to remain slow for the foreseeable future.
Many commentators in the popular press discuss AI as if we are on the verge of creating generally intelligent machines (Strong AI) that will require some level of ethics to operate in the real world without endangering humans. While Strong AI that can reason generally like a human is certainly not on the immediate horizon, it is valid to ask whether Weak AI needs some level of ethics.
There are situations where Weak AI needs something that looks like ethics. But in most cases what is needed is specifically targeted guidance, not general ethics. The best example I have seen is what a self-driving car should do if a pedestrian unexpectedly steps in front of it on a bridge. Say the car determines that it can take only one of two actions: run over the pedestrian, or steer off the bridge and sacrifice its passengers. How does it determine which action to take? Which action is the "ethical" one? I would certainly want to know what guidance the car has been given before I agree to ride in it. But this is not a general ethics problem; it is a situational issue that can be dealt with in a specific way.
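As a sketch of what such specifically targeted guidance could look like, the bridge scenario might be handled by an explicit, pre-declared policy rather than by any general ethical theory. The policy name and values below are hypothetical assumptions, intended only to show that the guidance is situational and can be inspected before a passenger agrees to ride.

```python
# Hypothetical sketch: the bridge dilemma handled as pre-declared,
# situational guidance that a rider could ask about before getting in.

# The policy is a plain configuration value, not a general ethical theory.
# Which value a real vehicle would ship with is exactly the question a
# passenger should be able to ask; "protect_passengers" is only a placeholder.
BRIDGE_POLICY = "protect_passengers"   # alternative: "protect_pedestrian"

def bridge_maneuver(policy: str) -> str:
    """Map the pre-declared policy to a concrete action for this one scenario."""
    if policy == "protect_pedestrian":
        return "steer_off_bridge"
    return "stay_on_bridge"

print(bridge_maneuver(BRIDGE_POLICY))  # the answer a rider can demand up front
```

The guidance covers one situation, is written down in advance, and can be audited; nothing about it requires the car to reason generally about right and wrong.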
We are truly living in a wondrous time. Inexpensive cloud computing platforms are giving rise to Weak AI systems that can achieve specific goals that were the stuff of science fiction a few short years ago. But the world is not on the verge of creating Strong AI systems that can reason generally about the world like a human. Hence, the need to teach a computer about ethics is not as urgent as many writers in the popular press would have us believe.