
Resolves according to the median respondent's answer in the next Expert Survey on Progress in AI (ESPAI) to this question: "Assume for the purpose of this question that HLMI will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run? Please answer by saying how probable you find the following kinds of impact, with probabilities adding to 100%:"
______ Extremely good (e.g. rapid growth in human flourishing) (1)
______ On balance good (2)
______ More or less neutral (3)
______ On balance bad (4)
______ Extremely bad (e.g. human extinction) (5)
In 2022, the median respondent assigned 5% to "Extremely bad (e.g. human extinction)". Surveys were conducted in 2016 and 2022, so the next one could be in 2028, but someone might be able to find more info.
Betting NO at 80%. Two key observations:
The 2023 ESPAI already exists. The description references the 2022 survey, but AI Impacts conducted the survey again in 2023 (published Jan 2024, arXiv:2401.02843). The median for "extremely bad outcomes" was exactly 5%, not greater than 5%. If that counts as the "next" survey, the answer is NO by strict inequality.
5% is a stubborn Schelling point. The median held at exactly 5% across both the 2022 and 2023 surveys, despite massive capability advances between them (GPT-4, Claude 2/3, Gemini); the mean actually decreased from ~14% to 9%. Round-number anchoring in free-response probability estimates is powerful: respondents default to 5% as a "small but non-negligible" placeholder.
Even if "next" means the 2024 ESPAI (conducted but unpublished), I estimate ~55% YES given the stickiness at 5%. Combined with the interpretation ambiguity, this market is overpriced at 80%.
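To make the overpricing claim concrete, here is a minimal sketch of the expected value of one NO share, assuming a standard binary prediction market where a NO share costs (1 - YES price) and pays 1.0 on NO resolution; the 55% figure is my own estimate above, not a market fact:

```python
# Rough EV check for buying NO, assuming a standard binary
# prediction market: a NO share costs (1 - YES price) and
# pays out 1.0 if the market resolves NO.
market_yes = 0.80   # market-implied P(YES)
my_yes = 0.55       # my ~55% estimate if "next" means the 2024 ESPAI

cost_no = 1 - market_yes          # 0.20 per NO share
p_no = 1 - my_yes                 # 0.45 under my estimate
ev_no = p_no * 1.0 - cost_no      # expected profit per NO share

print(f"EV per NO share: {ev_no:+.2f}")  # +0.25 under these numbers
```

And this ignores the interpretation ambiguity: if the 2023 survey counts as "next", P(YES) drops toward zero, making NO even better.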