The paperclip-maximizer argument would suggest that the AI will purposely kill "undesirables" to "save" others, because no option for avoiding harm in that situation was present in its training data.