• PeriodicallyPedantic@lemmy.ca · 10 hours ago

    Right, but that’s kind of like saying “I don’t kill babies” while you use a product made from murdered baby souls. Yes, you weren’t the one who did it, but your continued use of it caused the babies to be killed.

    There is no ethical consumption under capitalism and all that, but I feel like this is a line we’re crossing. This fruit is hanging so low it’s brushing the grass.

    • jsomae@lemmy.ml · 9 hours ago

      Are you interpreting my statement as being in favour of training AIs?

      • PeriodicallyPedantic@lemmy.ca · 6 hours ago

        I’m interpreting your statement as “the damage is done, so we might as well use it.”
        And I’m saying that using it causes them to train more AIs, which causes more damage.

        • jsomae@lemmy.ml · 6 hours ago

          I agree with your second statement. You have misunderstood me. I am not saying the damage is done so we might as well use it. I am saying people don’t understand that it is the training of AIs which is directly power-draining.

          I don’t understand why you think that my observation that people are ignorant about how AIs work is somehow an endorsement that we should use AIs.

          • PeriodicallyPedantic@lemmy.ca · 5 hours ago

            I guess.

            It still smells like an apologist argument to be like “yeah but using it doesn’t actually use a lot of power”.

            I’m actually not really sure I believe that argument either, though. I’m pretty sure that inference is hella expensive. When people talk about training, they don’t talk about the cost to train on a single input; they talk about the cost of the entire training run. So why are we talking about the cost to infer on a single input?
            What’s the cost of running training, per hour? What’s the cost of inference, per hour, on a similarly sized inference farm, running at maximum capacity?
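
            Like, just as a sketch (every number here is made up purely for illustration): if the training cluster and the inference farm are similar sizes and both run flat out, the per-hour energy is comparable; the difference is that training ends and deployment doesn’t.

                # Hypothetical, illustrative numbers only -- not real measurements.
                TRAIN_POWER_MW = 20      # assumed draw of a training cluster, in megawatts
                TRAIN_HOURS = 90 * 24    # assumed 90-day training run
                INFER_POWER_MW = 20      # similarly sized farm serving queries at capacity

                train_mwh = TRAIN_POWER_MW * TRAIN_HOURS
                # Hours of full-capacity inference that match the training run's total energy:
                print(train_mwh / INFER_POWER_MW)  # 2160.0 hours, i.e. 90 days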

            • jsomae@lemmy.ml · 4 hours ago

              Maybe you should stop smelling text and try reading it instead. :P

              Running an LLM in deployment can be done locally on one’s machine, on a single GPU, and in that case is like playing a video game for under a minute. OpenAI models are larger than that by a factor of 10 or more, so it’s maybe like playing a video game for 15 minutes (obviously this varies based on the response to the query).
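
              As a rough sketch of the local case (all figures assumed, not measured): a single consumer GPU drawing ~300 W for ~30 seconds of generation comes out to a few watt-hours per query, and scaling that by 10x gives the ballpark for a much larger hosted model.

                # Illustrative energy-per-query arithmetic; every figure is an assumption.
                gpu_watts = 300          # assumed draw of one consumer GPU under load
                seconds_per_query = 30   # assumed generation time for a local model

                wh_per_query = gpu_watts * seconds_per_query / 3600
                print(f"{wh_per_query:.1f} Wh per query")        # 2.5 Wh
                print(f"{wh_per_query * 10:.0f} Wh per query")   # ~25 Wh for a 10x larger model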

              It makes sense to measure deployment usage marginally, per query, for the same reason it makes sense to measure the environmental impact of a car in terms of hours or miles driven. There’s no natural way to do this for training, though. You could divide the training cost by the number of queries to amortize it across actual usage, which would make it seem significantly cheaper, but that has the unintuitive property that the per-query share goes down as more queries are made, so it’s unclear exactly how much of the cost of training should be assigned to any given query. It might make more sense to talk in terms of the expected total number of queries over the lifetime deployment of a model.
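
              Concretely, the amortization problem (hypothetical numbers again):

                # Amortizing a fixed training cost over a growing query count.
                # All figures are hypothetical.
                TRAIN_MWH = 50_000      # assumed one-time training energy, in MWh
                WH_PER_QUERY = 25       # assumed marginal inference energy, in Wh

                for n_queries in (1e6, 1e9, 1e12):
                    # Training share per query, converting MWh to Wh:
                    share_wh = TRAIN_MWH * 1e6 / n_queries
                    print(f"{n_queries:.0e} queries: {share_wh:,.2f} Wh training share + {WH_PER_QUERY} Wh inference")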