When I saw Blade Runner (directed by Ridley Scott) back in 1982, 2019 was almost forty years away, and at nineteen no one seriously thinks about what life will be like in their fifties – what a nightmare that would be! In fact, it has often been said that man’s worst nightmare would be to know his future, the date of his death; still, we keep “peeping” into the future and trying to guess, if not that fateful day, at least a few of the changes it will bring.
I have always been a fan of science fiction, having read many “serious” books about possible futures, such as Brave New World by Aldous Huxley, 1984 by George Orwell and, more recently, The Handmaid’s Tale by Margaret Atwood. I have also enjoyed more “imaginative” stories about strange planets, intelligent apes (Planet of the Apes) and deadly, monstrous creatures such as those depicted in H. G. Wells’ The War of the Worlds – and of course Stanley Kubrick’s mysterious, fascinating, troubling 2001: A Space Odyssey, with its take on Artificial Intelligence. Not to mention my cult movie, Ridley Scott’s Alien, and those that followed.
Still, I might say my favourite sci-fi movie ever has always been Blade Runner. It was definitely science fiction, as the action took place in 2019, but it also had romance, action, and a pessimistic vision of a future Los Angeles: huge buildings, almost constant rain or drizzle that made us think of pollution, and neon lights depicting an Asian woman advertising some product. That image was so strong that it has stayed with me, and some years ago, when I went to La Défense, the suburban business district of Paris, my comment was: “It looks like the city of Blade Runner without the neon lights!” Needless to say, I hated it as strongly as I love Paris.
Over the years I only saw the movie again once or twice, but last year I wanted to watch it once more before the sequel came out, so that I would remember all the details that inevitably fade with the passing of time. And so I did, and I marvelled at how some of the predictions for 2019 had been so accurate – or almost – while in other cases even the wildest imagination had not been enough. Some examples: intelligent robots identical to humans – discernible only through a deep analysis of their eyes – are not available yet, but may be in a matter of years; just remember Sophia, the robot presented last year at the Web Summit in Lisbon, causing some commotion. Flying cars are also not yet a reality, but are being tested by companies such as Uber. Still, as I watched the movie I couldn’t help laughing when I saw the character played by Harrison Ford with paper photographs in his hand, and realised it had never crossed the scriptwriters’ minds that by 2019 no one would be looking at actual photos anymore, but using a phone to take and keep them all! Smartphones have been such an incredible invention, such a breakthrough, that no one could have predicted the extent to which they are now used. Most communication devices depicted in the movie were really no more than seemingly sophisticated walkie-talkies; in this respect, reality has surpassed sci-fi by far.
The temptation to play God
As for the rest – Artificial Intelligence (AI) is already a reality. I’m quite sure Sophia and her brothers and sisters cannot have feelings yet, but the challenge remains for humans to play God, to create another being in their own image, a being that will be a true replica of ourselves. It’s frightening, in fact, but I believe the temptation will be too strong. Of the two main characteristics we might attribute to God, we are about to master the first – creating life similar to ours – and we have already carried out some experiments in that area, such as cloning. As for the other, avoiding death still seems very far away, even if we are prolonging life – not always with quality, unfortunately, but simply for the sake of it. AI brings many challenges, and one of the most important is certainly that of ethics. For my work I have read many articles about autonomous vehicles, and one of the issues facing manufacturers is that of the moral dilemma: how do you teach a machine to make a choice based on ethics? For instance, will you program the car to save the life of its owner rather than that of a pedestrian? Or, if it is faced with the choice between running over an elderly lady or a child, which should it choose? Just the other day we all read that a pedestrian had been run over by an autonomous vehicle being tested by Uber. That pedestrian was not at a crossing, and probably the car had not been programmed to expect that. Can machines be taught to expect the unpredictability of human beings? Again we come back to Blade Runner and to machines that have human-like feelings – and, of course, the most human of all emotions, the fear of death.
After seeing Blade Runner again, I went to the movies to see the sequel, Blade Runner 2049. For us humans in 2017, I feel it doesn’t show a future as surprising as 2019 looked back in 1982. The dark, depressing, lonely, dry world of 2049 doesn’t seem so impossible now. Maybe the only impossible thing is that a human and a replicant (the very “human” robots in the movie) might actually conceive a child together. Or not. But this time I’m not really worried, as I know I won’t be here to see it. At least I hope not, as the prolonging of life into old age doesn’t look appealing to me. But then, who knows: maybe in the midst of all these breakthroughs, mankind will discover the secret of keeping us young and fit, if not forever – forever is really a long time – at least until the end of our lives. If that meant we could age without all the illnesses and disabilities, it would be another story. Sci-fi, perhaps, but certainly appealing.