{"id":16801,"date":"2024-02-28T05:05:09","date_gmt":"2024-02-28T05:05:09","guid":{"rendered":"https:\/\/v4.fadingstar.mx\/?p=16801"},"modified":"2024-02-28T05:06:22","modified_gmt":"2024-02-28T05:06:22","slug":"existential-risk-from-artificial-general-intelligence","status":"publish","type":"post","link":"https:\/\/v4.fadingstar.mx\/2024\/02\/28\/existential-risk-from-artificial-general-intelligence\/","title":{"rendered":"Existential risk from artificial general intelligence"},"content":{"rendered":"\n
Oh okay, intelligent beings are difficult or impossible to control, got it. Is that why you pieces of shit did what you did to me as a kid?

…or as an adult, for that matter?

"If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control."
(Source: https://en.wikipedia.org/wiki/Existential_risk_from_artificial_general_intelligence)

Existential risk from artificial general intelligence is the idea that substantial progress in artificial general intelligence (AGI) could result in human extinction or an irreversible global catastrophe.[1][2][3]

One argument goes as follows: human beings dominate other species because the human brain possesses distinctive capabilities other animals lack. If AI were to surpass humanity in general intelligence and become superintelligent, then it could become difficult or impossible to control. Just as the fate of the mountain gorilla depends on human goodwill, so might the fate of humanity depend on the actions of a future machine superintelligence.[4]

The plausibility of existential catastrophe due to AI is widely debated, and hinges in part on whether AGI or superintelligence are achievable, the speed at which dangerous capabilities and behaviors emerge,[5] and whether practical scenarios for AI takeovers exist.[6] Concerns about superintelligence have been voiced by leading computer scientists and tech CEOs such as Geoffrey Hinton,[7] Yoshua Bengio,[8] Alan Turing,[a] Elon Musk,[11] and OpenAI CEO Sam Altman.[12] In 2022, a survey of AI researchers with a 17% response rate found that the majority of respondents believed there is a 10 percent or greater chance that our inability to control AI will cause an existential catastrophe.[13][14] In 2023, hundreds of AI experts and other notable figures signed a statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."[15] Following increased concern over AI risks, government leaders such as United Kingdom prime minister Rishi Sunak[16] and United Nations Secretary-General António Guterres[17] called for an increased focus on global AI regulation.

Two sources of concern stem from the problems of AI control and alignment: controlling a superintelligent machine or instilling it with human-compatible values may be difficult. Many researchers believe that a superintelligent machine would resist attempts to disable it or change its goals, as that would prevent it from accomplishing its present goals.
It would be extremely difficult to align a superintelligence with the full breadth of significant human values and constraints.[1][18][19] In contrast, skeptics such as computer scientist Yann LeCun argue that superintelligent machines will have no desire for self-preservation.[20]

A third source of concern is that a sudden "intelligence explosion" might take an unprepared human race by surprise. Such scenarios consider the possibility that an AI that is more intelligent than its creators might be able to recursively improve itself at an exponentially increasing rate, improving too quickly for its handlers and society at large to control.[1][18] Empirically, examples like AlphaZero teaching itself to play Go show that domain-specific AI systems can sometimes progress from subhuman to superhuman ability very quickly, although such systems do not involve altering their fundamental architecture.[21]