• Gaywallet (they/it)@beehaw.orgOP · 19 points · 10 months ago · edited

You’re absolutely correct, yet ask someone who’s very pro-AI and they might dismiss such claims as “needing better prompts”. Also, many people may not be as tech-informed as you are, and bringing algorithmic bias to light can help them understand and navigate the world we now live in. Dismissing the article just because you already know the answer doesn’t really encourage people to participate in a discussion.

      • Even_Adder@lemmy.dbzer0.com · 5 points · 10 months ago

        It’s really hard getting dark skin sometimes. A lot of the time it’s not even just the model; LoRAs and Textual Inversions make the skin lighter again, so you have to try even harder. It’s going to take conscious effort from people to tune models that are inclusive. With the way media is biased right now, I feel like it’s going to take a lot of effort.

        • jarfil@beehaw.org · 1 point · 10 months ago

          “Inclusive models” would need to be larger.

          Right now people seem to prefer smaller quantized models, with whatever set of even smaller LoRAs on top to make them output what they want… and only the more generic elements included in the base model.

            • jarfil@beehaw.org · 1 point · 10 months ago

              Are you ready to run a 100B FP64 parameter model? Or even a 10B FP32 one?

              Over time, I wouldn’t be surprised if 500B INT8 models became commonplace with neuromorphic RAM, but there’s still some time for that to happen.
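A quick back-of-the-envelope sketch of the sizes being thrown around here (my own illustration, not from the thread): a model’s weight-only footprint is roughly parameter count times bytes per parameter.

```python
def model_size_gb(params_billions: float, bytes_per_param: int) -> float:
    """Approximate weight-only memory footprint in GB.

    Ignores activations, KV caches, and runtime overhead; the 1e9 for
    'billions of params' cancels against the 1e9 bytes per GB.
    """
    return params_billions * bytes_per_param

# The configurations mentioned above:
print(model_size_gb(100, 8))  # 100B params at FP64 (8 bytes) -> 800.0 GB
print(model_size_gb(10, 4))   # 10B params at FP32 (4 bytes)  -> 40.0 GB
print(model_size_gb(500, 1))  # 500B params at INT8 (1 byte)  -> 500.0 GB
```

Even the smallest of these is well beyond a typical consumer GPU, which is why quantized models plus small LoRAs are the popular compromise.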

                • jarfil@beehaw.org · 1 point · 10 months ago

                  For more inclusive models, or for current ones? In order to add something, either the size has to grow, or something would need to get pushed out (content, or quality). 4GB models are already at the limit of usefulness, both DALLE3 and SDXL run at about 12B parameters, so to make them “more inclusive” they’d have to grow.

    • Admetus@sopuli.xyz · 3 points · 10 months ago

      And every single Asian game and anime tends to go for skimpy or virtual softcore with its female characters. Rarely do you see a female character in full armor.

    • Appoxo@lemmy.dbzer0.com · 6 points · 10 months ago · edited

      But, that said, when I messed around with AI image generators, pretty much any kind of prompt that included woman or female designations tended towards sexualized versions, even to the point of violating its own content policy.

      I tried it in the Copilot app, and one result included an Asian woman; it wasn’t sexual, but it was definitely very sexy in style.

      Prompt: Generate me a picture of a female wizard reading a massive book of spells

      Pictures:

      Edit:
      Female wizard: kinda magical fantasy, has good intentions.
      Witch: spooky and mysterious, Halloween themes.
      Sorceress: same as wizard, but with selfish/bad intentions.

      • DdCno1@beehaw.org · 20 points · 10 months ago

        What is sexy in style here? They are wearing loose, long-sleeved robes up to the neck. Makeup and hair are just following current trends.

          • falsem@kbin.social · 6 points · 10 months ago

            My experience has been that they have a tendency to make overly attractive men too. Getting them to generate anyone average, never mind ugly or with deformities (e.g. scars), is really hard.

  • megopie@beehaw.org · 27 points · 10 months ago · edited

    If I had to guess, they probably did a shit job labeling the training data, or used pre-labeled images. Now, where in the world could they have found huge amounts of pictures of women on the internet with the specific label of “Asian”?

    Almost like most of what determines the quality of the output is not “prompt engineering” but the back-end work of labeling the training data properly, and you’re not actually saving much labor over more traditional methods, just making the labor more anonymous, easier to hide, and thus easier to exploit and devalue.

    Almost like this shit is a massive farce, just like the “metaverse” and crypto, that will fail to be market-viable and waste a shit ton of money that could have been spent on actually useful things.

    • webghost0101@sopuli.xyz · 7 points · 10 months ago

      They did literally nothing, and seem to use the default Stable Diffusion model, which is supposed to be a tech demo. It would have been easy to put “(((nude, nudity, naked, sexual, violence, gore)))” as the negative prompt.
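A minimal sketch of what this suggestion amounts to: always prepending a fixed safety negative prompt before the prompts reach the pipeline. The function name and the `prompt`/`negative_prompt` pair are my own illustrative assumptions in the style of a diffusers pipeline call, not Lensa’s actual code. (The triple parentheses are Stable Diffusion prompt syntax for extra emphasis.)

```python
# Always-on safety negative prompt, as proposed in the comment above.
SAFETY_NEGATIVE = "(((nude, nudity, naked, sexual, violence, gore)))"

def build_prompts(user_prompt: str, user_negative: str = "") -> dict:
    """Combine the user's prompts with the always-on safety negative prompt."""
    if user_negative:
        negative = f"{user_negative}, {SAFETY_NEGATIVE}"
    else:
        negative = SAFETY_NEGATIVE
    return {"prompt": user_prompt, "negative_prompt": negative}

# A diffusers-style pipeline would then be called roughly as:
#   pipe(**build_prompts("portrait photo of a woman, fantasy armor"))
```

As the reply below notes, this is a mitigation rather than a fix: a negative prompt filters outputs after the fact, while the bias lives in the training data itself.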

      • megopie@beehaw.org · 2 points · 10 months ago

        The problem is that negative prompts can help, but when the training data is so heavily poisoned in one direction, stuff gets through.

  • jarfil@beehaw.org · 22 points · 10 months ago · edited

    Wrong question. The right question would be:

    Why is AI as used in Lensa’s Magic Avatars App Pornifying Asian Women?

    Ask Lensa to remove “ugly” and similar negative prompts from their avatar-generating app, and let’s see what comes out.

    https://stable-diffusion-art.com/how-to-use-negative-prompts/#Universal_negative_prompt

    For reference, check out how that same negative prompt turns a chubby-ish, poorly shaved average guy into a male pornstar, or a valet into a rich daddy’s boy.

    • 1984@lemmy.today · 3 points · 10 months ago · edited

      In 2024, the brainwashing of people is almost complete.

      Sensuality is now porn. :)

  • millie@beehaw.org · 10 points · 10 months ago

    I’m not exposed to a huge amount of media coming out of Asia, outside of a handful of Korean shows that Netflix has picked up, and anime. But if anime is any indicator, I’m not really surprised that the training data for Asian women leans more toward overt sexualization. Even setting aside the whole misogynistic ‘fan service’ thing, I don’t feel like I see as much representation of women who defy traditional gender roles as in the last twenty or so years of Western media.

    It certainly could be that anime is actually a huge outlier here, but if the training data is primarily from the English-speaking web, it might be overrepresented anyway. Either way, when it comes to weird AI image behaviors, it pays to think about the probable training data.

    Like, Stable Diffusion seems to do a better job of rendering jewelry if you tell it to surround it with berries. Given the output, this seems to be due to Christmas-themed jewelry ads. They also tend to add a lot of bokeh for the same reason.

    • IHeartBadCode@kbin.social · 6 points · 10 months ago

      Absolutely this. The reason AI defaults female characters into “female armor mode” is the same reason Excel autofills January, February, Maruary. Our spicy autocorrect overlords cannot extrapolate data in a direction their training has no knowledge of.

    • Scrubbles@poptalk.scrubbles.tech · 3 points · 10 months ago

      You train on a bunch of Reddit crap, you’re going to get neckbeard Reddit crap out. It’d look different if they only used art history books.

  • webghost0101@sopuli.xyz · 4 points · 10 months ago · edited

    While I agree there is a big issue with bad, biased, and sexist training data, this entire article is about the Lensa app, which uses (I assume) the default Stable Diffusion model trained on LAION-5B.

    Intentionally creating sexualized pictures is banned in their guidelines. And yet no one thought of creating a good negative prompt that negates any kind of nudity or eroticism? It still doesn’t properly fix the training data, but at least people wouldn’t be unwillingly presented with porn of their own images.

    Also, anyone can create a dataset and build a Stable Diffusion model, so why is Lensa relying on the default model, which is more like a quick-and-dirty tech demo? They had all the tools to do this right, but decided not to even use the easy, lazy ones.

  • Even_Adder@lemmy.dbzer0.com · 4 points · 10 months ago · edited

    If we’re talking open-source models, it’s because a lot of the people fine-tuning them are Asian, and have that bias.

  • Omega_Haxors@lemmy.ml · 4 points · 10 months ago

    Stable Diffusion is little more than content laundering. It cannot create anything more than what you put in.