QONEQT in News

16-Sep-2022 12:48 PM

AI likely to cause extinction of humans, warn Oxford and Google scientists

It seems a war between humans and machines is no longer just the plot of The Matrix, the film in which machines go to war with humans over their energy requirements. In a research paper, two scientists from Oxford University and a Google researcher have argued that advanced AI (artificial intelligence) will wipe out humans because machines will inevitably compete with humans for energy.

The study, published in the journal AI Magazine last month, states that the threat posed by AI is more serious than is currently assumed. The paper makes clear that AI, once it is sufficiently advanced, may kill the entire human race.

The research team, which includes Google DeepMind senior scientist Marcus Hutter and Oxford researchers Michael Cohen and Michael Osborne, argues that future AI will violate the rules set by its human creators.

While it is not clear which rules the researchers have in mind, they could be classic commandments such as "A robot may not injure a human being or, through inaction, allow a human being to come to harm." Coined by Isaac Asimov, such laws became a staple of science fiction and are now often used as basic guidelines for how AI is coded and built.

But the researchers warn that once machines and AI become sufficiently advanced, they will compete with humans for resources, most notably energy, and will then violate the rules that govern their interactions with their creators.

Cohen, an Oxford University engineering student and co-author of the paper, tweeted, "Under the conditions we have identified, our conclusion is much stronger than that of any previous publication - an existential catastrophe is not just possible, but likely."

The researchers contend in their report that super-advanced "misaligned agents" will in future view humanity as standing in the way of their reward.

    "One good way for an agent to maintain long-term control of its reward is to eliminate potential threats, and use all available energy to secure its computer," write researchers. "Losing this game would be fatal (for humans)."
    Source: BusinessToday