Life Coach

Dr Peter L Nelson

AI & Conscience

Can intelligent machines have a conscience?
Artificial intelligence (AI) is produced by computationally simulating some of the behavioral aspects of what a human central nervous system appears to do. One such function of our nervous system is experienced by us as consciousness. This is not to assert that consciousness is necessarily generated by the brain; it could, after all, be ‘received’ by the brain (acting as a tuned detector), and consciousness would still appear to be one of the brain’s innate functions. Whatever the case, we can safely say, “No nervous system, no conscious experience for that person, animal, thing or whatever.” Although AI can appear to behave as if it is conscious, it remains a simulation, without any sign of the direct, experiential, felt knowing that philosophers call qualia.

When we speak of having a conscience, we do not imagine such behavior except as one of the functions of an actual conscious being. No doubt we could build a machine that manifests a kind of rule-driven behavior reminiscent of a being with a conscience, but as soon as it encountered what we think of as a moral dilemma, we would immediately see that it lacks a living conscience as we usually understand it.

For example, one of our rules for moral behavior in the Judeo-Christian world is, “Thou shalt not kill.” Most of us would identify our conscience as an internal ‘voice’ or compulsion that prevents us from killing the neighbor’s dog as it defecates on our front lawn yet again. However, when we are faced with a crazed person about to kill a child, and we can stop the murder only by killing the maniac first, we are thrown onto the horns of a moral dilemma. Our conscience screams at us, “Killing is wrong,” but our failure to act may lead to the death of an innocent child.

The above is an extreme case, but there are many such moral dilemmas that we face throughout life, from trivial events to those on a grander scale. How would an advanced AI robot, manifesting an ability to reflect on itself (to report on its ‘inner’ workings, as described by Prof Michael Graziano) and thereby appearing to have consciousness, handle moral dilemmas? I would argue that any programmed set of rules for deciding whether or not we kill the potential killer would not produce a choice born of conscience, because there are no qualia, no direct, felt knowing of the value and meaning of the lives involved. Underlying that absence of direct knowing is the lack of any emotional, empathic connection between such a machine and people, or other machines, for that matter.
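To make that point concrete, consider a deliberately crude sketch in Python. The rules, their priorities, and the fields describing the situation are all invented for illustration; a real system would be far more elaborate, but the logic would be of the same kind: inputs go in, the first matching rule fires, an action comes out, and nothing is felt along the way.

```python
# A deliberately crude sketch of a rule-driven "moral agent".
# The rules, priorities, and situation fields are invented for
# illustration; the point is that the output is a mechanical lookup.

def rule_driven_choice(situation):
    """Scan fixed rules in priority order and return the first match."""
    rules = [
        # Rescue rule, ranked above the prohibition on killing:
        (lambda s: s["innocent_at_risk"] and s["only_option_is_lethal"],
         "use lethal force"),
        # "Thou shalt not kill":
        (lambda s: s["act_would_kill"], "refrain"),
        # Default:
        (lambda s: True, "do nothing"),
    ]
    for condition, action in rules:
        if condition(situation):
            return action

# The dilemma from above: both the prohibition and the rescue rule apply.
dilemma = {
    "act_would_kill": True,
    "innocent_at_risk": True,
    "only_option_is_lethal": True,
}
print(rule_driven_choice(dilemma))  # -> use lethal force
```

Notice that the machine resolves the dilemma instantly, because for it there is no dilemma: one rule simply outranks another. Nothing in that lookup corresponds to a conscience screaming, “Killing is wrong.”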

Even an artificial intelligence that has built its rules through advanced neural-network learning is still a rule machine, albeit a more nuanced one. It is not acting from the self-reflective, felt knowing that marks a conscious being, one who perceives the world through a context of meaning and values derived from empathic connection. What the machine is doing, after all, is computationally simulating a different order of being, a living, biological being; as an AI simulation, it is not doing what that being does. It is a behavioral imitation only.
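The same holds when the rules are learned rather than hand-written. Here is a toy example, again entirely invented: train a single perceptron on a handful of labelled ‘dilemmas’ and look at what the learning actually produces.

```python
# A toy "learned conscience": a single perceptron trained on invented
# examples. Everything it learns is still a fixed numerical rule.

# Invented data: (act_would_kill, innocent_at_risk) -> should_act (1/0)
examples = [
    ((1, 0), 0),  # killing with no one to save: refrain
    ((1, 1), 1),  # killing that saves an innocent: act
    ((0, 1), 1),  # saving without killing: act
    ((0, 0), 0),  # nothing at stake: refrain
]

weights = [0.0, 0.0]
bias = 0.0

# The classic perceptron update rule, run for a few epochs.
for _ in range(20):
    for (x1, x2), target in examples:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = target - output
        weights[0] += error * x1
        weights[1] += error * x2
        bias += error

# The whole "conscience" is now just these numbers:
print(weights, bias)  # converges to [0.0, 2.0] -1.0 on this data

def decide(x1, x2):
    """Resolving a dilemma is a weighted sum and a threshold."""
    return weights[0] * x1 + weights[1] * x2 + bias > 0

print(decide(1, 1))  # the dilemma above, settled by arithmetic: True
```

After training, the machine’s ‘conscience’ is two weights and a threshold. The learned rule is subtler than a hand-written one, but it is a rule all the same, and resolving a dilemma remains arithmetic rather than felt knowing.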

When the day comes that an AI machine turns to me spontaneously and expresses direct knowledge of what I am feeling, without my producing any overt physical or verbal cues, then I will start to trust its conscience as a worthy moral guide.