• fidodo@lemmy.world
    1 year ago

    AI has been able to do fingers for months now. It’s moving very rapidly so it’s hard to keep up. It doesn’t do them perfectly 100% of the time, but that doesn’t matter since you can just regenerate it until it gets it right.

    • YoorWeb@lemmy.world
      1 year ago

      “For your verification please close left eye and run two fingers through your hair while eating a cauliflower with whipped cream. Attach a paperclip to your left ear and write your username on your forehead using an orange marker.”

    • Paradachshund@lemmy.today
      1 year ago

      You could probably just set up a time for the person to send a photo, and then give them a keyword to write on the paper, and they must send it within a very short time. Combine that with a weird gesture and it’s going to be hard to get a convincing AI replica. Add another layer of difficulty and require photos from multiple angles doing the same things.
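The timed-keyword idea above can be sketched in a few lines. This is a minimal illustration, not a production liveness check: the function names, the 120-second window, and the submission flow are all assumptions for the sake of the example.

```python
# Sketch of a timed keyword challenge: the server issues a random keyword,
# and the photo with the keyword written on paper must come back within a
# short deadline. All names and the window length are illustrative.
import secrets
import time

CHALLENGE_WINDOW_SECONDS = 120  # assumed deadline; tune for your flow

def issue_challenge():
    """Create a one-time keyword and remember when it was issued."""
    return {"keyword": secrets.token_hex(4), "issued_at": time.monotonic()}

def verify_submission(challenge, submitted_keyword, submitted_at):
    """Accept only the right keyword, returned inside the window."""
    in_time = (submitted_at - challenge["issued_at"]) <= CHALLENGE_WINDOW_SECONDS
    matches = secrets.compare_digest(challenge["keyword"], submitted_keyword)
    return in_time and matches

challenge = issue_challenge()
# Simulate the user replying 30 seconds later with the correct keyword:
ok = verify_submission(challenge, challenge["keyword"],
                       challenge["issued_at"] + 30)
print(ok)  # True
```

The gesture and multi-angle requirements would be checked by a human (or a separate model) on the submitted photos; the code only enforces the unpredictable-keyword-plus-deadline part.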

      • Vampiric_Luma@lemmy.ca
        1 year ago

LoRAs can be supplied to the AI. These are small add-on weight sets trained on specific concepts (certain hand gestures, lighting levels, whatever style you need) that fine-tune the general model.
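For what "fine-tuning with a LoRA" actually means mathematically: instead of retraining the full weight matrix W, a LoRA ships two small matrices A and B, and the adapted weight is W plus a scaled low-rank product of them. The sketch below is conceptual, in plain Python, and the names are illustrative rather than any specific library's API.

```python
# Conceptual sketch of a LoRA (low-rank adaptation) update. The fine-tuned
# weight is W + scale * (B @ A), where B is m x r and A is r x n for a
# small rank r, so only B and A need to be trained and distributed.

def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)]
            for row in X]

def apply_lora(W, B, A, scale=1.0):
    """Return W + scale * (B @ A), the adapted weight matrix."""
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(w_row, d_row)]
            for w_row, d_row in zip(W, delta)]

# A 2x2 base weight adapted with a rank-1 LoRA (B is 2x1, A is 1x2):
W = [[1.0, 0.0],
     [0.0, 1.0]]
B = [[1.0],
     [2.0]]
A = [[0.5, 0.5]]

print(apply_lora(W, B, A, scale=0.1))
```

Because the rank r is tiny compared to the full matrix, a LoRA for a hand-gesture or lighting concept is only megabytes, which is why they are easy to share and stack.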

I have the minimum hardware requirements to produce art, and HQ output takes 2 minutes; low-quality output takes only seconds. I can fine-tune my art at the LQ level, then use the AI to upscale it back to HQ. And that's me scraping by, using only local software and my own hardware.

Do this through a service or a GPU farm and you can spit it out much quicker. The services I've used are easy to figure out and do great work, for free* in a lot of cases.

I think these suggestions will certainly be barriers, and I can think of some more stop-gaps, but they won't stop everyone from slipping through the cracks, especially as passionate individuals hyper-focus on technology that the rest of us only think about in passing.

      • fidodo@lemmy.world
        1 year ago

A simpler option is to just have the user take a video. I've already seen that in practice.

      • ExperimentalGuy@programming.dev
        1 year ago

I feel like there's a way to get around that… Like, if you really wanted, some sort of system to Photoshop the keyword onto the piece of paper. That would let you generate the image without having to worry about the AI generating the keyword itself.

Edit: also, does anyone remember that one paper about a new AI architecture where you could put in some sort of negative image to additionally prompt the AI for a specific shape, output, or position?

        • Unkn8wn69@monero.town
          1 year ago

Just write on paper and overlay it via Photoshop. Photopea has a literal one-button function for exactly that; it's very easy to do. All you need is blank paper and a picture with enough light.