Just the other day I asked an AI to create an image showing a diverse group of people, and it didn’t include a single Black person. I asked it to rectify this, and it couldn’t. This went on a number of times before I gave up. There’s still a long way to go.

  • SSUPII@sopuli.xyz · 3 days ago

    I’m thinking it instead won’t be the case. Bigger models will be able to store more of the less common realities.

    • Eq0 · 3 days ago

      They will, at best, replicate their training data sets. They will learn racial discrimination and propagate it.

      If you have a deterministic system, for example one that rates a CV, you can ensure that no obvious negative racial bias is included. If instead you have an LLM (or another AI model), there is no supervision over which data elements are used or how. The only thing we can check is whether the predictions match the (potentially racist) data.
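      To illustrate the point about auditability: a minimal sketch of a deterministic CV scorer whose inputs are an explicit allow-list, so a reviewer can verify that protected attributes (name, race, photo) never enter the computation. All feature names and weights here are hypothetical.

```python
# Hypothetical deterministic CV scorer: every input field and weight is
# explicit, so an auditor can confirm no protected attribute is used.

ALLOWED_FEATURES = {"years_experience", "degree_level", "relevant_skills"}

WEIGHTS = {
    "years_experience": 2.0,
    "degree_level": 3.0,
    "relevant_skills": 1.5,
}

def score_cv(cv: dict) -> float:
    # Reject any input with fields outside the audited allow-list,
    # so bias cannot sneak in through an unreviewed attribute.
    unexpected = set(cv) - ALLOWED_FEATURES
    if unexpected:
        raise ValueError(f"unaudited fields in CV: {unexpected}")
    return sum(WEIGHTS[k] * cv.get(k, 0) for k in ALLOWED_FEATURES)

print(score_cv({"years_experience": 5, "degree_level": 2, "relevant_skills": 4}))
# 5*2.0 + 2*3.0 + 4*1.5 = 22.0
```

      With an LLM there is no equivalent allow-list: the model may latch onto any signal in the text (a name, a postcode), and the only check available is statistical, comparing its outputs against the possibly biased data.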

    • luxyr42@lemmy.dormedas.com · 3 days ago

      You may be able to prompt for the less common realities, but the model’s default is still going to depict a “doctor” as a white man.