ylai@lemmy.ml to Not The Onion@lemmy.world · English · 10 months ago
AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)
Cross-posted to: worldnews@lemmit.online, technology@lemmy.world
fidodo@lemmy.world · English · 10 months ago
These aren’t simulations that estimate results; they’re language models extrapolating from a vast body of human knowledge embedded as artifacts in text. They’re not necessarily going to pick the best long-term solution.
intensely_human@lemm.ee · English · 10 months ago
Language models can extrapolate, but they can also reason (by extrapolating human reasoning).