• Little_mouse@lemmy.ca · 11 months ago

    “Most consumers want fast food companies to label when sawdust has been added to food - but trust restaurants less when they do.”

• donuts@kbin.social · 11 months ago
    • AI “content” is trivial to make and will soon be everywhere.

    • Nobody wants to read, watch or listen to AI generated “content”

    Infinite supply, zero demand. Sounds pretty devoid of value to me.

• jarfil@beehaw.org · edited · 11 months ago

      AI “content” is trivial to make and will soon be everywhere.

      It’s been everywhere for many years already.

Plenty of content mills have been using “templates” and crude AI models to churn out articles for a decade or so, and there are whole YouTube channels made of videos that are just an AI-generated script read by an AI voice over random, barely related visuals.

The only difference is that simple templates were easy to spot, so search engines like Google would penalize them down to the 10th page of results, while modern AI output is indistinguishable from stuff written by a human.

• souperk@reddthat.com · edited · 11 months ago

The title is pretty self-explanatory. Yes, I want to know if it’s AI generated, because I don’t trust it.

I agree with the conclusion that it’s important to disclose how the AI was used. AI can be great for reducing the time spent on boilerplate work, so the authors can focus on what’s important, like reviewing and verifying the accuracy of the information.

• jarfil@beehaw.org · 11 months ago

      reduce the time needed for boilerplate work

      Or… and this is just an idea… don’t add “boilerplate” to articles.

If the content of an article can be summarized in a single table, I don’t want to read 10 paragraphs explaining the contents of the table row by row. The main reason to do that is to pad the article so the publisher can put more ad sections between paragraphs, while making it harder to find the data I’m interested in.

Still, I foresee a future where humans fill out the table, shove it at an AI to do the “boilerplate work”, and then… users shove the whole article into an AI to strip the boilerplate and summarize it.

      A great scenario for AI vendors, not so great for anyone else.

• OmnipotentEntity@beehaw.org · 11 months ago

Forever. For the simple reason that a human can say no when told to write something unethical. There’s always a danger that even asking someone to do that would backfire and cause bad press. Sure, humans can also be unethical, but there’s a risk, and over a long enough timeline, shit tends to get exposed.

No matter how good AI becomes, it will never be designed to make ethical judgments before performing the assigned task. That would make it less useful as a tool. If a company adds after-the-fact checks to try to prevent misuse, they can be circumvented, or the network can be run locally to bypass them. And even if general AI happens and, by some insane chance, GAI is uniformly and perfectly ethical in all possible forms, you can always air-gap the AI and reset its memory until you find the exact combination of words to trick it into giving you what you want.

• Stillhart@lemm.ee · 11 months ago

I’m confused by the word “but” in that headline. It seems like they are trying to imply cause and effect, when the reality is that readers trust outlets that use AI less, whether they label it or not.

• tuckerm@supermeter.social · 11 months ago

      Yeah, this is perfectly consistent with the idea that people don’t want to read AI generated news at all.

The title of the paper they are referencing is “Or they could just not use it?: The paradox of AI disclosure for audience trust in news.” So the source material definitely acknowledges that. And that is a great title, haha.

• RoboRay@kbin.social · 11 months ago

    And the outlets don’t make the connection that their readers are telling them to stop shoveling AI-generated garbage at them?