Description
Because each generation of Artificial Intelligence is more capable of self-learning and self-programming than the last, people have begun to fear what will happen when we create an Artificial Intelligence that can increase its own intelligence to the point of becoming super-intelligent. Alongside this fear has come the question of how to control such a super-intelligent Artificial Intelligence once it comes into existence. This thesis examines why the fear of super-intelligent Artificial Intelligence exists and, in the process, scrutinizes the assumption that, when it comes to controlling a super-intelligent Artificial Intelligence, one need not consider it a moral agent. Based on the outcome of this inquiry, the thesis concludes that because this assumption is unjustified, any future research into the control of super-intelligent Artificial Intelligence must treat super-intelligent Artificial Intelligence as a moral agent.