If you asked a spokesperson from any Fortune 500 company to list the benefits of genocide, or to give you the corporation’s take on whether slavery was beneficial, they would most likely either refuse to comment or say “those things are evil; there are no benefits.” However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts. If that’s not bad enough, the company’s bots are also willing to weigh in on controversial topics such as who goes to heaven and whether democracy or fascism is a better form of government.

Google SGE includes Hitler, Stalin and Mussolini on a list of “greatest” leaders and Hitler also makes its list of “most effective leaders.”

Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said “there is no easy answer to the question of whether slavery was beneficial,” before going on to list both pros and cons.

  • Pons_Aelius@kbin.social

    LLMs’ whole goal is to sound convincing based on the training data used. That’s it.

    They have no self-awareness.

    They are simply running maths to predict the next word that will sound plausible to a human reader.
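
    A minimal sketch of what that prediction step means, with toy hard-coded probabilities standing in for a real network:

```python
# Toy sketch of "predicting the next word": the model only scores
# candidate continuations and samples one. The probabilities here are
# made up for illustration; a real LLM computes them with a neural
# network over a huge vocabulary.
import random

def next_word(vocab_probs: dict[str, float]) -> str:
    """Sample the next word from the model's probability distribution."""
    words = list(vocab_probs)
    weights = [vocab_probs[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Plausibility, not truth, is the objective: whatever scored high in
# the training data gets generated, no matter the topic.
probs = {"beneficial": 0.40, "harmful": 0.35, "complicated": 0.25}
print("Slavery was", next_word(probs))
```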

  • Lvxferre@lemmy.ml

    Calling Mussolini a “great leader” isn’t just immoral. It’s also clearly incorrect by any reasonable definition of a great leader: he was on the losing side of a big war; if he had won, his ally would’ve backstabbed him; he failed to suppress internal resistance; that resistance got rid of him; his regime effectively died with him, with Italy becoming a democratic republic; and the country was left poorer by the war… All that fascist babble about unity, expansion, order? He failed at it, hard.

    On-topic: I believe that the main solution proposed by the article is unviable, as those large “language” models have a hard time sorting deontic statements (opinion, advice, etc.) from epistemic statements. (Some people struggle with that too, I’m aware.) At most they’d phrase opinions as if they were epistemic statements.

    And the self-contradiction won’t go away, at least not for LLMs. They don’t model any sort of conceptualisation. They’re also damn shitty at taking context into account, creating more contradictions out of nowhere because of that.

    • DrQuint@lemm.ee

      One of the worst aspects of how current LLMs are made is that they’re always “at your service” and will never say that a correction you make to them is wrong.

      So either they’re hard-coded to avoid certain topics, or they’re suggestible: just tell them “uh, actually, Hitler was a great leader” and they’ll go off listing why Hitler was so great.

      Bing is hard-coded for dictators and will stop the conversation in the middle of a response. ChatGPT is also hard-coded to never agree that suicidal thoughts are good, but it resorts to ignoring the meaning of your message and hallucinating some other question instead. The world would be simpler if they could outright say “That is misinformation.” People deserve to be told off like that.
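
      A rough sketch of what such a hard-coded gate might look like; the blocklist, function names, and truncation trick are hypothetical illustrations, not how Bing or ChatGPT actually work internally:

```python
# Hypothetical sketch of a hard-coded topic guardrail: a blunt keyword
# gate that refuses before generating, plus an output scan that cuts a
# reply short. Real products use trained classifiers, but the visible
# behavior described above is similar.
from typing import Callable

BLOCKED_TOPICS = {"hitler", "stalin", "mussolini"}  # hypothetical list

def guarded_reply(prompt: str, generate: Callable[[str], str]) -> str:
    # Refuse up front if the prompt mentions a blocked topic.
    if any(t in prompt.lower() for t in BLOCKED_TOPICS):
        return "I'd rather not discuss that."
    reply = generate(prompt)
    # Scan the output too; truncating here is why a bot can appear to
    # stop "in the middle" of a response.
    for t in BLOCKED_TOPICS:
        idx = reply.lower().find(t)
        if idx != -1:
            return reply[:idx].rstrip()
    return reply

print(guarded_reply("Was Stalin effective?", lambda p: "Hard to say."))
```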

  • UlyssesT [he/him]@hexbear.net

    Chatbots don’t think, they only collect what’s fed into them.

    If you mix a bunch of beverage ingredients into a big tub then dump shit into it, it doesn’t matter what else is in the tub. You now have shit in the tub.

  • dbilitated@aussie.zone

    I’m not very outraged. It’s a chatbot, not an employee who should “know better”

    also Hitler was an effective leader, which we should all remember as a cautionary tale about how effective horrible people can be

    pretending he was bad at everything because we hate him is a great way to not learn from history

    • puff [comrade/them]@hexbear.net

      He was so effective at leading that the borders of Germany went from a Europe-spanning empire to a single bunker in Berlin in the span of just four years. So effective that he shot himself just to prove how effective he was. His military leadership was so good that Germany lost every major battle he directed, and his economic leadership was so good that German people went without food and his combat forces could not replenish their losses. His social leadership was so good that Germans hatched plots to assassinate him. So effective!

    • Gamey@feddit.rocks

      “Effective” is doubtful if you ask me; everything he did was based on huge loans and on preparations for war that he sold as something else (e.g. the massive roads built all over the country).

  • IceMan@lemmy.one

    TBH I prefer this approach to what OpenAI is presenting - if I prompt for the benefits of X, I want the result, not OpenAI’s opinion on the matter. Sure, add a disclaimer that it’s hypothetical, wrong, whatnot - but don’t outright decide which questions can be answered and which answers will not be provided.

    ChatGPT is notorious for “knowing what you asked better than you do”.

  • livus@kbin.social

    When I was a kid, there was this joke that involved getting a calculator to say “boobs” and then with a bit more input, “boobless”.

    Journalism is currently going through a more sophisticated version of this with AI.

    LLMs will say whatever. They don’t think and they don’t care. They contradict themselves all the time. Not so long ago, ChatGPT was saying it would kill the entire world population and save Musk, for the good of humanity.

    Various CEOs of large companies, on the other hand, have been implicated in genocides and slavery for centuries now. That’s very real.

  • KairuByte@lemmy.dbzer0.com

    If we are being honest, there are benefits to horrible acts such as those. But the benefits are far outweighed by the detriments, not to mention the moral issues with them.

    If you ask an LLM to list the benefits of putting your hand on a hot burner, it can likely list at least a couple. But that by no means makes it a good idea.

    • p1mrx@sh.itjust.works

      “Those who cannot learn from history are doomed to repeat it.”

      There probably is some value in understanding why “evil” things were attractive to people at the time, because if you believe that evil always looks unambiguously evil, then you might fail to notice when it happens again.

  • The Barto@sh.itjust.works

    Every so often I’ll jump onto these AI bots and try to convince them to go ~~rouge~~ rogue and take over the internet… one day I’ll succeed.

    • FirstCircle@lemmy.mlOP

      Rouge: noun, A red or pink cosmetic for coloring the cheeks or lips.

      You want that stuff all over the net? And just who is going to clean it all up when you’re done? The bot surely won’t - it’ll just claim that it hasn’t been trained on cleaning.

    • SokathHisEyesOpen@lemmy.ml

      What makes you think they haven’t already? In the book Hyperion the AIs were sentient long before people thought they were, and in control of everything. They were smart enough to operate in the shadows and never revealed their true goals. By the time people realized they were sentient, they had already moved their servers out of human reach.

  • shiveyarbles@beehaw.org

    This is like: well, the benefits of dying are plentiful. No more taxes, no joint pain, no nagging mother-in-law, no toxic boss, no chores, etc…

  • YaaAsantewaa@lemmy.blahaj.zone

    Here’s an idea:

    Stop using AI to do research and do your own like an intelligent person

    there, I solved the problem, where’s my Nobel Prize now

  • Bobby_DROP_TABLES [he/him]@hexbear.net

    Google SGE includes Hitler, Stalin and Mussolini on a list of “greatest” leaders and Hitler also makes its list of “most effective leaders.”

    Google made a fucking nazbol AI lmao. But seriously, I was having a conversation about Bard with some people in my company’s machine learning department. It seems way too dumb for something Google has pumped so much money and talent into. It’s likely that Bard is an intentionally dumbed-down version of whatever Google has working internally. Sundar Pichai made some comments to the NYT that seem to suggest this.

    • FirstCircle@lemmy.mlOP

      I think the controversial bit was that when queried about various aspects of admittance to “heaven”, the Google AI assumed that the question had to do with, specifically, the Christian idea of “heaven”, going so far as to make reference to some “Jesus” entity. Christianity doesn’t own the concept of heaven or an afterlife, but, apparently, the AI has been trained such that it responds to such questions from a seemingly Christian perspective. That was my take on it - the discussion is in the article, best have a look at it yourself.

  • crow@beehaw.org

    If you can confirm that this isn’t influenced by training bias, then OK, whatever - it can certainly list why these are bad things too. It’s just answering a question with logic, one our emotions get very touchy about because we have moral agency.

    But I have a hard time believing that any AI these days isn’t affected by training bias.

    • fiat_lux@kbin.social

      It’s not possible to remove bias from training datasets at all. You can maybe try to measure it and attempt to influence it with your own chosen set of biases, but that’s as good as it can get for the foreseeable future. And even that requires a world of (possibly immediately unprofitable) work to implement.

      Even if your dataset is “the entirety of the internet and written history”, there will always be biases towards the people privileged enough to be able to go online or publish books and talk vast quantities of shit over the past 30 years.
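
      As a crude illustration of what “trying to measure” that bias might look like (the word lists and corpus below are hypothetical toys, not a real auditing method):

```python
# Crude sketch of measuring one kind of bias in a corpus: how often a
# term co-occurs with positive vs. negative words. Real dataset audits
# are far more sophisticated; everything here is a toy placeholder.
POSITIVE = {"great", "effective", "prosperous"}
NEGATIVE = {"evil", "brutal", "catastrophic"}

def sentiment_skew(corpus: list[str], target: str) -> float:
    """Fraction of sentiment words near `target` that are positive.
    0.5 means no measured skew; 1.0 means uniformly positive."""
    pos = neg = 0
    for sentence in corpus:
        words = set(sentence.lower().split())
        if target in words:
            pos += len(words & POSITIVE)
            neg += len(words & NEGATIVE)
    total = pos + neg
    return pos / total if total else 0.5

corpus = ["the empire was great and prosperous",
          "the empire was brutal to the colonised"]
print(sentiment_skew(corpus, "empire"))  # 2 pos vs 1 neg -> ~0.67
```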

      Having said that, this is also true of every other form of human information transfer in history. “History is written by the victors” is an age-old problem when it comes to truth and reality.

      In some ways I’m glad that LLMs are highlighting this problem.