What does it mean for something to be real? The answer seems obvious: things are real if I can see and feel them. My house is real. My cup of coffee is real. My trip to Greenland was real (although ill-advised). My computer is real. But are the things you see on your computer real? What does it mean for something to be data – does it exist in the same obvious way? Can a computer have a mind that thinks and feels – and if so, is that mind real?
These might seem like esoteric questions, or even pedantic bickering, but the answers have real-world applications. For instance, violent video game usage is often correlated with aggressive behavior. But does that make playing a violent video game itself immoral? On a consequentialist view, it is – things that cause people to do immoral things are themselves immoral. But not all people who play violent video games do violent things – it is not a direct cause-and-effect relationship. To say it were would strip the person playing the game of any agency – their actions would be merely an effect of external factors. Still, there are those who would seek legislation to keep certain video games off the market, or at the very least, out of the hands of impressionable children.
But I want to step away from this consequentialist argument, because there is an equally interesting question: can an act be immoral if it is simulated? Computer graphics, robotics, and artificial intelligence are reaching a point where it’s difficult not to get our empathy mixed up in the simulation itself. People can feel sadness or anger at inanimate objects to greater or lesser extents, depending on how “real” the simulation is. In other words, we feel empathy toward simulations. If we accept that much of our moral sensibility stems from our ability to empathize (sociopaths do not feel empathy, which is why they act immorally), then it is not a stretch to say that these simulations have some level of moral character. But what implications does this have? Consider the following trailer for the video game “Hatred,” which has quite a few people in an uproar (video is NSFW):
In case you don’t want to watch the video (or can’t stomach it): in this game, you play a character who decides he hates and despises the world and everyone in it, and so goes on a “genocide crusade” to kill as many people as he can, in as brutal a fashion as possible, until he is finally gunned down himself. Your goal is simply to kill innocent people in a violent, serious fashion until you are killed. A common argument I’ve read for why people hate this game but not something like Grand Theft Auto 5 – where you can also go around brutally killing innocent people – is that in “Hatred” brutally gunning down and stabbing innocents is the goal of the game, whereas in GTA5 it is the player’s choice to do so, a choice that has nothing to do with the goals of the game (and will in fact hinder your ability to accomplish them). I think people are not wrong when they say that “Hatred” somehow has a lower moral character than GTA5 (whether a work of art can have a good or bad moral character is a different subject, which I’ll address at some point), but I think they’re wrong to use this argument.
In “Hatred” the simulations are very realistic. The victims sob, cry for mercy, and ask why. They are rendered to look lifelike, which makes committing violent acts against them look and feel realistic. And yet anyone who sees the trailer, or plays the game, knows that those people are not real. They aren’t feeling any real pain. There is no actual life being extinguished. None of those simulated people had any real hopes or dreams or memories or loved ones. They are pixels. And yet, even setting aside the consequentialist argument – that people playing this game may be influenced toward aggressive behavior – there is something that feels morally wrong about this game.
So, what happens when simulated people become even more realistic? What happens when they can simulate more of a sense of agency – when, if left alone, they simulate a life that seems very real, very personal, and even very conscious? Would it be immoral to make a game – or a virtual reality – where players can go on a killing spree and brutally murder these simulations? Would that make those players (or the game’s developers) actual murderers?
And what about simulating sex acts that we find immoral? It’s not far-fetched to think that robots that simulate sex will one day become very real – perhaps sooner than we think.
These things could become very realistic, and that in itself doesn’t seem so bad. There are already such things as sex dolls. Does it make a difference if the robot can simulate emotions? Desires? Or even fear and pain? And what happens if we make a sex robot that simulates a child in look, feel, and simulated emotion? We all know that the chassis itself is not a child, and that the simulated mind isn’t a thinking, feeling child whose life will be significantly altered by what someone does to it. So is it still immoral to commit sexual acts against a robot that simulates the experience of a child?
These are certainly uncomfortable questions to consider, but with the accelerating pace of technology, I think they must be considered. And once they are, there remains the hairsplitting over when a simulation of a person becomes something sentient, with its own first-person subjective experience of the world. It’s amazing and fascinating that we now live in a world where such questions can be asked, but it’s equally frightening, as we feel our way through Plato’s cave. I think these sorts of inquiries are going to become very real, possibly even within the next ten years. So don’t shy away from the discomfort, because these things will come up whether you pay attention to them or not.