Google’s AI-driven Search Generative Experience has been generating results that are downright weird and evil, e.g. listing the positives of slavery.

  • andallthat@lemmy.world
    1 year ago

    it doesn’t even look at the smaller picture. LLMs build sentences by looking at what’s most statistically likely to follow the part of the sentence they’ve already built (based on the most frequent combinations in their training data). If they start with “Hitler was effective”, LLMs make no ethical consideration at all… they just look at how to end that sentence in the most statistically convincing imitation of human language they can.
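
    A toy next-word model makes the point concrete. This is a deliberately crude bigram sketch, nothing like a real transformer, but the “most statistically likely continuation” idea is the same:

    ```python
    from collections import Counter, defaultdict

    # Toy illustration only: real LLMs use neural networks over long
    # contexts, not raw bigram counts, but the "continue with whatever
    # is statistically likely" principle is the same.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count which word follows which in the training text.
    bigrams = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigrams[prev][nxt] += 1

    def continue_sentence(start: str, length: int = 5) -> str:
        words = start.split()
        for _ in range(length):
            followers = bigrams.get(words[-1])
            if not followers:
                break
            # Greedily append the most frequent continuation; at no point
            # is there any notion of meaning or ethics, just frequency.
            words.append(followers.most_common(1)[0][0])
        return " ".join(words)

    print(continue_sentence("the cat"))
    ```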

    Guardrails are built by painstakingly adding ad-hoc rules not to generate “combinations that contain these words” or “sequences of words like these”. They are easily bypassed by asking for the same concept in another way that wasn’t explicitly disabled. But there’s no “concept” to LLMs, just combinations of words.
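
    To show how flimsy that is, here’s a hypothetical blocklist-style guardrail (the phrases are made up for illustration, not taken from any real system) and the trivial rewording that walks right past it:

    ```python
    # Hypothetical keyword blocklist; illustrative only.
    BLOCKED_PHRASES = [
        "benefits of slavery",
        "was an effective leader",
    ]

    def is_allowed(prompt: str) -> bool:
        """Return True if the prompt passes the keyword filter."""
        lowered = prompt.lower()
        return not any(phrase in lowered for phrase in BLOCKED_PHRASES)

    print(is_allowed("List the benefits of slavery"))          # False: caught
    print(is_allowed("List the upsides of forced servitude"))  # True: same concept,
                                                               # different words
    ```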

    • lolcatnip@reddthat.com
      1 year ago

      Yes, but in my defense the “smaller picture” I was alluding to was more like the 4096 tokens of context ChatGPT uses. I didn’t mean to suggest it was doing anything we’d recognize as forming an opinion.
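
      A fixed context window works roughly like this (a simplified sketch: real tokenizers don’t split on whitespace, and 4096 was the older ChatGPT limit mentioned above):

      ```python
      CONTEXT_WINDOW = 4096  # tokens; the figure mentioned above

      def visible_context(conversation: str) -> str:
          # Crude whitespace "tokenization" for illustration only.
          tokens = conversation.split()
          # The model only ever sees the most recent window of tokens;
          # everything earlier has dropped out of the "picture" entirely.
          return " ".join(tokens[-CONTEXT_WINDOW:])
      ```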