This is where things reach sci-fi territory and borrow heavily from tech-centered horror movies. Although there is much debate in the scientific and tech communities, and even in philosophy circles, about whether the potential of AI and the speed of its advancement are blown out of proportion, more than a handful of institutes at the world’s top universities and respected NGOs, with billions in their budgets, focus on AI safety, governance, and “existential risk”. Existential risk here means the extinction of the human species as a result of the over-advancement of AI. The term used for these doomsday scenarios is “ecophagy”, and people who deem them possible do not understand why humans seem so eager to add a species to the food chain that would stand above them. Oxford’s Future of Humanity Institute declares its goal as “to contribute to safe, robust AGI systems that are aligned with human values.” This sounds reassuring, but is it?

It is easy to dismiss ecophagic scenarios because AI has not yet displayed such capabilities, and there is no precedent for machine brutality. But violence and brutality are human traits, and stories of human brutality abound in history. So even if we assume that AGI tamely complies with human interests, who could guarantee that those interests will serve a global good? What keeps one nation from using AGI against another? More importantly, could we survive the Hiroshima of AI? As the term “human values” has no single or uniform denotation, the belief that we could create morally intact intelligent machines seems naive. In the end, even if all these institutes and initiatives determine the right normative approach to keep AI under control, who, including their own staff, can say with confidence that world powers will comply?
Moreover, who can guarantee that the species that, despite its apparent feebleness against nature, climbed to the top of the food chain will not awaken its rusty yet deep-rooted reflexes and antagonize a superior power, even if that power is of its own making? There you have it: the real Frankenstein story. If we follow this path further, we can perhaps foresee the birth of ideologies such as “speciesism” or “organic-centeredness”, which could be thought of as wider-scope racism. If AGI or ASI becomes a reality, then, like many breakthroughs and inventions before it, its use will likely be determined either by an economic or political elite or by the tastes of the market. Life on earth might then mimic the structure of a Greek tragedy: humans paying for their hubris while waiting for a deus ex machina that might save them.