Code used in the analysis is here

      • Thorny_Insight@lemm.ee

        As does most criticism of LLMs. We wanted one that behaves like a human and then got angry when that’s exactly what it does.

  • gedaliyah@lemmy.world

    How could we possibly expect that there wouldn’t be bias? It’s based on the patterns that humans use. Humans have bias. The difference is that humans can recognize their bias and work to overcome it. As far as I know, ChatGPT can’t do that.

    • abhibeckert@lemmy.world

      humans can recognize their bias

      Can they? I’m not convinced.

      As far as I know, ChatGPT can’t do that.

      You do it with math. Measure how many women hold C-level positions at the company and introduce deliberate bias into the hiring process (human or AI) to steer the company towards a target of 50%.

      It’s not easy, but it can be done. And if you have smart people working on it, you’ll get it done.
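      To make the “do it with math” idea concrete, here’s a minimal sketch of that kind of deliberate correction, assuming candidates already have a numeric screening score. The names, the 20% starting share, and the boost formula are all hypothetical illustrations, not a real hiring pipeline.

      ```python
      # Toy sketch: measure the gap to a representation target and apply a
      # deliberate counter-bias to candidate scores. Everything here is a
      # hypothetical illustration, not a real hiring system.
      from dataclasses import dataclass

      @dataclass
      class Candidate:
          name: str
          gender: str   # "f" or "m" in this toy example
          score: float  # output of whatever screening step (human or AI)

      def representation_gap(current_female_share: float, target: float = 0.50) -> float:
          """How far the company currently is from the target (e.g. 50% women in C-level roles)."""
          return max(0.0, target - current_female_share)

      def adjusted_score(c: Candidate, gap: float, strength: float = 1.0) -> float:
          """Boost under-represented candidates in proportion to the measured gap."""
          boost = strength * gap if c.gender == "f" else 0.0
          return c.score + boost

      if __name__ == "__main__":
          # Suppose 2 of 10 C-level positions are currently held by women: 20% vs. a 50% target.
          gap = representation_gap(current_female_share=0.20)
          candidates = [
              Candidate("A", "m", 0.78),
              Candidate("B", "f", 0.55),
              Candidate("C", "f", 0.71),
          ]
          for c in sorted(candidates, key=lambda c: adjusted_score(c, gap), reverse=True):
              print(c.name, round(adjusted_score(c, gap), 2))
      ```

      The point of tying the boost to the measured gap is that the correction shrinks automatically as the company approaches the target, rather than acting as a fixed quota.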

      • JoBo@feddit.uk

        You start off by claiming that humans can’t recognise their biases and end up by saying that there’s no problem because humans can recognise their biases so well they can programme it out of AI.

        Which is it?

    • Plopp@lemmy.world

      Because they don’t know what “AI” is, they think it’s this technical thing that just knows things, all the things, magically. I’ve seen confident statements like “we use AI in our recruiting process because it has no bias!!” 🤦‍♂️

  • backgroundcow@lemmy.world

    “I’ve created this amazing program that more or less precisely mimics the response of a human to any question!”

    “What if I ask it a question where humans are well known to apply all kinds of biases? Will it give a completely unbiased answer, like some kind of paragon of virtue?”

    “No”

    <Surprised Pikachu face>

  • dgmib@lemmy.world

    Job seeker’s next ChatGPT prompt:

    Here’s a job posting and my resume. Can you tell me what to change to make me sound like a perfect fit for the role?

    ChatGPT:

    • Change name from “Latifa Tshabalala” to “Kevin Smith” …

  • Cannibal_MoshpitV3@lemmy.world

    I’ve had several bosses tell me that the moment they see a stereotypical African American name, they throw out the application/resume.

    • Potatos_are_not_friends@lemmy.world

      Honestly that works for human recruiters too.

      I have a very “Caucasian” name. For years, I got some seriously confused faces during interviews from people who thought I was a great applicant on paper but then cut the meeting short because I suddenly “don’t fit the culture”.

      I now include my photo in my application. Saves me time too, because I don’t want to work at a racist ass company.