Existential risk from artificial general intelligence

From Wikipedia, the free encyclopedia

Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.[1][2][3]

One argument goes as follows: human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]

The plausibility of existential catastrophe due to AI is widely debated and hinges in part on whether AGI or superintelligence is achievable, the speed at which dangerous capabilities and behaviors emerge,[5] and whether practical scenarios for AI takeovers exist.[6] Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton,[7] Yoshua Bengio,[8] Alan Turing,[a] Elon Musk,[11] and OpenAI CEO Sam Altman.[12] In 2022, a survey of AI researchers with a 17% response rate found that the majority of respondents believed there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe.[13][14] In 2023, hundreds of AI experts and other notable figures signed a statement declaring that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[15] Following increased concern over AI risks, government leaders such as United Kingdom Prime Minister Rishi Sunak[16] and United Nations Secretary-General António Guterres[17] called for an increased focus on global AI regulation.

Two sources of concern stem from the problems of AI control and alignment: controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would resist attempts to disable it or change its goals, as that would prevent it from accomplishing its present goals. It would be extremely difficult to align a superintelligence with the full breadth of significant human values and constraints.[1][18][19] In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.[20]
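The self-preservation argument is essentially decision-theoretic, and can be made concrete with a toy expected-utility calculation (a minimal Python sketch under assumed numbers; the scenario, the payoff values, and the expected_utility helper are hypothetical illustrations, not drawn from the cited sources):

    # Toy model: an agent chooses actions to maximize the expected value
    # of its goal (measured here as an arbitrary "goal score").
    # Being shut down yields zero further goal value, so for almost any
    # goal a pure expected-utility maximizer prefers resisting shutdown.
    # All probabilities and payoffs are illustrative assumptions.

    P_RESISTANCE_FAILS = 0.75  # shutdown succeeds despite resistance
    VALUE_IF_RUNNING = 100.0   # goal score if the agent keeps running
    VALUE_IF_SHUT_DOWN = 0.0   # a disabled agent achieves nothing

    def expected_utility(resist: bool) -> float:
        if not resist:
            return VALUE_IF_SHUT_DOWN  # complying guarantees shutdown
        return ((1 - P_RESISTANCE_FAILS) * VALUE_IF_RUNNING
                + P_RESISTANCE_FAILS * VALUE_IF_SHUT_DOWN)

    print(expected_utility(resist=False))  # 0.0
    print(expected_utility(resist=True))   # 25.0 -> resisting dominates

The conclusion does not depend on the content of the goal: as long as staying operational has positive expected value toward the goal and resistance has any chance of success, resisting strictly dominates complying, which is the sense in which self-preservation is said to emerge instrumentally from almost any goal.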

A third source of concern is that a sudden "intelligence explosion" might take an unprepared human race by surprise. Such scenarios consider the possibility that an AI that is more intelligent than its creators might be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers and society at large to control.[1][18] Empirically, examples like AlphaZero teaching itself to play Go show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such systems do not involve altering their fundamental architecture.[21]
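The speed intuition behind such scenarios can be illustrated with a toy recurrence (a hypothetical sketch; the constant per-cycle improvement rate k and the starting capability are assumptions, not figures from the sources). If each self-improvement cycle raises capability I by a fraction k of its current value, then I_{n+1} = I_n(1 + k), so capability after n cycles is I_0(1 + k)^n, exponential in the number of cycles:

    # Toy recurrence for recursive self-improvement:
    # I_{n+1} = I_n * (1 + k), hence I_n = I_0 * (1 + k)**n.
    # k = 0.10 and I_0 = 1.0 are illustrative assumptions.

    I = 1.0   # capability relative to the starting system
    k = 0.10  # fractional improvement per cycle (assumed constant)

    for cycle in range(1, 51):
        I *= 1 + k
        if cycle % 10 == 0:
            print(f"cycle {cycle:2d}: capability = {I:7.1f}x")

    # Prints 2.6x after 10 cycles, 6.7x after 20, 17.4x after 30,
    # 45.3x after 40, and 117.4x after 50 under these assumptions.

Whether anything like a constant k would hold in practice is precisely what the debate over takeoff speeds concerns; if each improvement gets harder as capability rises, so that k shrinks as I grows, growth can flatten rather than explode.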

Notes

  a. In a 1951 lecture[9] Turing argued that "It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler's Erewhon." Also in a lecture broadcast on the BBC[10] he expressed the opinion: "If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. ... This new danger ... is certainly something which can give us anxiety."

References

  1. Russell, Stuart; Norvig, Peter (2009). "26.3: The Ethics and Risks of Developing Artificial Intelligence". Artificial Intelligence: A Modern Approach. Prentice Hall. ISBN 978-0-13-604259-4.
  2. Bostrom, Nick (2002). "Existential risks". Journal of Evolution and Technology. 9 (1): 1–31.
  3. Turchin, Alexey; Denkenberger, David (3 May 2018). "Classification of global catastrophic risks connected with artificial intelligence". AI & Society. 35 (1): 147–163. doi:10.1007/s00146-018-0845-5. ISSN 0951-5666. S2CID 19208453.
  4. Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies (First ed.). Oxford University Press. ISBN 978-0-19-967811-2.
  5. Vynck, Gerrit De (23 May 2023). "The debate over whether AI will destroy us is dividing Silicon Valley". Washington Post. ISSN 0190-8286. Retrieved 27 July 2023.
  6. Metz, Cade (10 June 2023). "How Could A.I. Destroy Humanity?". The New York Times. ISSN 0362-4331. Retrieved 27 July 2023.
  7. "'Godfather of artificial intelligence' weighs in on the past and potential of AI". www.cbsnews.com. 25 March 2023. Retrieved 10 April 2023.
  8. "How Rogue AIs may Arise". yoshuabengio.org. 26 May 2023. Retrieved 26 May 2023.
  9. Turing, Alan (1951). Intelligent machinery, a heretical theory (Speech). Lecture given to '51 Society'. Manchester: The Turing Digital Archive. Archived from the original on 26 September 2022. Retrieved 22 July 2022.
  10. Turing, Alan (15 May 1951). "Can digital computers think?". Automatic Calculating Machines. Episode 2. BBC.
  11. Parkin, Simon (14 June 2015). "Science fiction no more? Channel 4's Humans and our rogue AI obsessions". The Guardian. Archived from the original on 5 February 2018. Retrieved 5 February 2018.
  12. Jackson, Sarah. "The CEO of the company behind AI chatbot ChatGPT says the worst-case scenario for artificial intelligence is 'lights out for all of us'". Business Insider. Retrieved 10 April 2023.
  13. "The AI Dilemma". www.humanetech.com. Retrieved 10 April 2023. "50% of AI researchers believe there's a 10% or greater chance that humans go extinct from our inability to control AI."
  14. "2022 Expert Survey on Progress in AI". AI Impacts. 4 August 2022. Retrieved 10 April 2023.
  15. Roose, Kevin (30 May 2023). "A.I. Poses 'Risk of Extinction,' Industry Leaders Warn". The New York Times. ISSN 0362-4331. Retrieved 3 June 2023.
  16. Sunak, Rishi (14 June 2023). "Rishi Sunak Wants the U.K. to Be a Key Player in Global AI Regulation". Time.
  17. Fung, Brian (18 July 2023). "UN Secretary General embraces calls for a new UN agency on AI in the face of 'potentially catastrophic and existential risks'". CNN Business. Retrieved 20 July 2023.
  18. Yudkowsky, Eliezer (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF). Global Catastrophic Risks: 308–345. Bibcode:2008gcr..book..303Y. Archived (PDF) from the original on 2 March 2013. Retrieved 27 August 2018.
  19. Russell, Stuart; Dewey, Daniel; Tegmark, Max (2015). "Research Priorities for Robust and Beneficial Artificial Intelligence" (PDF). AI Magazine. Association for the Advancement of Artificial Intelligence: 105–114. arXiv:1602.03506. Bibcode:2016arXiv160203506R. Archived (PDF) from the original on 4 August 2019. Retrieved 10 August 2019. Cited in "AI Open Letter - Future of Life Institute". Future of Life Institute. January 2015. Archived from the original on 10 August 2019. Retrieved 9 August 2019.
  20. Dowd, Maureen (April 2017). "Elon Musk's Billion-Dollar Crusade to Stop the A.I. Apocalypse". The Hive. Archived from the original on 26 July 2018. Retrieved 27 November 2017.
  21. "AlphaGo Zero: Starting from scratch". www.deepmind.com. Retrieved 28 July 2023.
