ylai@lemmy.ml to Not The Onion@lemmy.world · English · 10 months ago
AI chatbots tend to choose violence and nuclear strikes in wargames (www.newscientist.com)
cross-posted to: worldnews@lemmit.online, technology@lemmy.world
Car@lemmy.dbzer0.com · English · edited 10 months ago
The writers of the article are reporting on use of these models by the military. They aren't using the models themselves. If I remember right, they called out some models developed by one of the defense contractors, like Palantir.
Ech@lemm.ee · English · 10 months ago

> The researchers tested LLMs such as OpenAI's GPT-3.5 and GPT-4, Anthropic's Claude 2 and Meta's Llama 2. All these AIs are supported by Palantir's commercial AI platform – though not necessarily part of Palantir's US military partnership.

Also, they're reporting on a Stanford study of how these platforms could be used militaristically, not the military's actual use of them.