• Pacattack57@lemmy.world · ↑8 ↓1 · 59 minutes ago

    When I’m told there are power issues and to conserve power, I drop my AC to 60 and leave all my lights on. The only way to get them to fix the grid is to break it.

  • merc@sh.itjust.works · ↑12 · 2 hours ago

    Worse is Google, which insists on shoving a terrible AI-based result in your face every time you do a search, with no way to turn it off.

    I’m not telling these systems to generate images of cow-like girls, but I’m getting AI shoved in my face all the time whether I want it or not. (I don’t).

      • TriflingToad@sh.itjust.works · ↑5 · edited · 2 hours ago

        It’s from https://perchance.org/welcome and it’s super cool because it’s half a soulless AI and half a genuinely neat tool that gets people into programming. They encourage people to learn how to code their own AIs and have fun with it, so users actually end up caring about the Internet. I would absolutely have DEVOURED it when I was 13 on Tumblr. (I forgot my ADHD meds today, sorry if I’m rambling.)

  • Allonzee@lemmy.world · ↑17 ↓5 · edited · 3 hours ago

    We’re going away, folks, and nothing of any true value will be lost, except all the species that did live in homeostasis with the Earth, the ones we’re taking with us in our species’ avarice-induced murder-suicide.

    • oni ᓚᘏᗢ@lemmy.world · ↑6 ↓1 · 2 hours ago

      I’ve been trying to write this comment as concisely as possible; I’m trying my best. “We’re going away”, yes, that’s true. No matter what, we are leaving this place, but that doesn’t mean the last days of humanity have to be surrounded by pollution and trash. All I can take from the quote in the image is that we should let big companies shit on us till we die.

      • Allonzee@lemmy.world · ↑3 ↓1 · edited · 1 hour ago

        “let”

        The sociopath fascist capitalists won. They have multilayered protection from us: endless propaganda dividing us and turning us on one another, government capture, exclusive use of state violence to protect the capital markets, and literal state-sanctioned murder for private profit. This isn’t a war; this is a well-oiled Orwellian occupation. The people surrendered half a century ago without terms and received in exchange the delusion that they’ll be the rich ones one day, herp derp.

        We can’t do anything about the misery they spread, and that sucks. We don’t have to add to our misery by pretending there’s hope and we can turn any of it around. They’re going to do what they’re going to do, and short of being a lone “terrorist” who takes a pot shot at them like Luigi, there’s nothing to be done, because half of us are too cowardly and addicted to social opiates (fast food, social media, literal opiates, etc.), or too deluded and actually on the robber barons’ side out of pick-me mentality, to take it as a rallying cry.

        Only the planet itself, our shared habitat, can stop them. And it will, regardless of all the studies they kill, the ecological treaties they betray, and all the propaganda they spread. The capitalists’ reign of terror will end when enough of their suckers are starving, not before.

  • burgerpocalyse@lemmy.world · ↑5 · 2 hours ago

    i feel it would actually kill some people to just say “yes, AI uses a lot of power” with no other qualifying statements tacked on

  • leftthegroup@lemmings.world · ↑2 · 2 hours ago

    Didn’t some legislation come out banning making laws against AI? (Which I realize is a fucking crazy sentence in the first place: nothing besides rights should just get immunity from all potential new laws.)

    So the cities aren’t even the bad guys here. The Senate is.

    • MotoAsh@lemmy.world · ↑2 · edited · 1 hour ago

      It’s both. Also, don’t let the House, the Supreme Court, or the orange buffoon and his cabinet escape culpability. Checks and balances can work … when they aren’t all bought and paid for by rich fucks.

    • TriflingToad@sh.itjust.works · ↑7 · edited · 2 hours ago

      I have Llama 3.2 on my phone, and it’s really funny because it’s so low-powered and dumb, but so sweet.

      It’s like a little friend to talk to when I don’t have Internet. He’s a lil stupid but he got the spirit.

      • WorldsDumbestMan@lemmy.today · ↑1 · 52 minutes ago

        Use Qwen 2.5, that’s my recommendation. You can also set “pals”. And the best part is that I have a portable battery and solar charger, so I could theoretically run it from solar alone (and have in the past).

  • QueenHawlSera@sh.itjust.works · ↑2 · 3 hours ago

    I wish the afterlife were real, so I could experience a world where God, not man, was in charge.

    Sadly, that chump ain’t real either.

    I am not liberated by the Death of God; it is something I live in terror of.

  • dutchkimble@lemy.lol · ↑2 · 3 hours ago

    Will 1 image of a girl with 5 tits take more energy to make, or 5 images of a girl with only 1 tit?

  • jsomae@lemmy.ml · ↑28 ↓15 · edited · 5 hours ago

    I know she’s exaggerating, but this post yet again underscores how nobody understands that it is training AI that is computationally expensive. Deploying an AI model draws power comparable to running a high-end video game. How can people hope to fight back against things they don’t understand?
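
    For scale, here’s a rough sketch using the usual rules of thumb (about 6·N·D FLOPs to train an N-parameter model on D tokens, and about 2·N FLOPs per generated token at inference). The model size and token counts are illustrative assumptions, not anyone’s published numbers:

    ```python
    # Rule-of-thumb FLOP math: training is paid once, inference per reply.
    N = 70e9                 # assumed parameter count
    D = 2e12                 # assumed training tokens
    train_flops = 6 * N * D  # one-time training cost

    tokens_per_reply = 500                  # assumed reply length
    reply_flops = 2 * N * tokens_per_reply  # cost of one generated reply

    print(f"training ≈ {train_flops / reply_flops:.1e} replies' worth of compute")
    # -> on the order of 10^10 replies for these assumptions
    ```

    Whether training or serving dominates overall therefore depends on how many replies a deployed model ends up answering.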

    • PeriodicallyPedantic@lemmy.ca · ↑4 ↓1 · 1 hour ago

      Right, but that’s kind of like saying “I don’t kill babies” while you use a product made from murdered baby souls. Yes, you weren’t the one who did it, but your continued use of it caused the babies to be killed.

      There is no ethical consumption under capitalism and all that, but I feel like here is a line we’re crossing. This fruit is hanging so low it’s brushing the grass.

      • MotoAsh@lemmy.world · ↑2 · 1 hour ago

        Well you asked for six tits but you’re getting five. Why? Because the AI is intelligent and can count, obviously.

    • domdanial@reddthat.com · ↑24 · 4 hours ago

      I mean, continued use of AI encourages the training of new models. If nobody used the image generators, they wouldn’t keep trying to make better ones.

    • FooBarrington@lemmy.world · ↑21 ↓1 · 5 hours ago

      It’s closer to running 8 high-end video games at once. Sure, from a scale perspective it’s further removed from training, but it’s still fairly expensive.

      • brucethemoose@lemmy.world · ↑1 · edited · 48 minutes ago

        Not at all. Not even close.

        Image generation is usually batched and takes seconds, so a single H100 SXM draws about 700W for a few seconds to serve a batch of a few images to multiple users. Maybe more for the absolute biggest (but SFW, no porn) models.

        LLM generation takes more VRAM but is MUCH more compute-light. Typically one has banks of 8 GPUs in multiple servers serving many, many users at once. Even my lowly RTX 3090 can serve 8+ users in parallel with TabbyAPI (and a modestly sized model) before becoming compute-bound.

        So in a nutshell, imagegen (on an 80GB H100) is probably more like 1/4 to 1/8 of a video game running at once (not 8 at once), and only for a few seconds.

        Text generation is similarly efficient, if not more so. Responses take longer (many seconds, except on special hardware like Cerebras CS-2s), but they are parallelized over dozens of users per GPU.

        This is all excluding more specialized hardware like Google’s TPUs, Huawei NPUs, Cerebras CS-2s and so on, which are clocked far more efficiently than Nvidia/AMD GPUs.

        …The worst are probably video generation models. These are extremely compute-intense and take a long time (at the moment), so you are burning something like a few minutes of gaming time per output.

        ollama/sd-web-ui are terrible analogs for all this because they are single-user and relatively unoptimized.
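
        To put rough numbers on the imagegen case, here’s a back-of-envelope sketch. Every figure is an assumption pulled from the estimates above (700W H100, a few seconds per batch) or a guess (gaming-GPU draw), not a measurement:

        ```python
        # Back-of-envelope: energy per generated image vs. an hour of gaming.
        H100_WATTS = 700         # single H100 SXM under load (assumed)
        BATCH_SECONDS = 5        # assumed time per image batch
        IMAGES_PER_BATCH = 4     # assumed images served per batch

        GAMING_GPU_WATTS = 350   # rough draw of a high-end gaming GPU

        joules_per_image = H100_WATTS * BATCH_SECONDS / IMAGES_PER_BATCH
        gaming_joules_per_hour = GAMING_GPU_WATTS * 3600

        print(f"~{joules_per_image:.0f} J per image")  # ~875 J
        print(f"1 h of gaming ≈ {gaming_joules_per_hour / joules_per_image:.0f} images")
        ```

        Under these assumptions, an hour of gaming costs about as much GPU energy as ~1,400 images.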

      • jsomae@lemmy.ml · ↑5 ↓3 · 4 hours ago

        Really depends. You can locally host an LLM on a typical gaming computer.
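
        For instance, a minimal local setup with llama-cpp-python (the model file name is a placeholder; any quantized GGUF small enough for your GPU works):

        ```python
        # Minimal local LLM on a gaming PC via llama-cpp-python.
        from llama_cpp import Llama

        llm = Llama(
            model_path="./llama-3.2-3b-instruct-q4.gguf",  # placeholder path
            n_gpu_layers=-1,  # offload every layer to the GPU if it fits
        )

        out = llm("Q: Name three planets. A:", max_tokens=32)
        print(out["choices"][0]["text"])
        ```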

        • FooBarrington@lemmy.world · ↑10 ↓1 · 4 hours ago

          You can, but that’s not the kind of LLM the meme is talking about. It’s about the big LLMs hosted by large companies.

        • floquant@lemmy.dbzer0.com · ↑5 · edited · 4 hours ago

          True, and that’s how everyone who is able should use AI, but OpenAI’s models are in the trillion-parameter range. That’s 2-3 orders of magnitude more than what you can reasonably run yourself.
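
          (Quick sanity check on that gap: at 8-bit quantization the weights alone need about one byte per parameter, so model size maps almost directly to VRAM. The parameter counts below are illustrative.)

          ```python
          # VRAM needed just to hold weights at ~1 byte per parameter (8-bit).
          for params in (8e9, 70e9, 1e12):  # local-size, large open, frontier-scale
              gb = params / 1e9             # bytes -> gigabytes at 1 byte/param
              print(f"{params / 1e9:>5.0f}B params -> ~{gb:,.0f} GB of weights")
          # 8 GB fits one consumer GPU; 1,000 GB needs racks of datacenter cards.
          ```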

          • jsomae@lemmy.ml · ↑4 ↓3 · edited · 3 hours ago

            This is still orders of magnitude less than what it takes to run an EV, which is an eco-friendly form of carbrained transportation. Especially if you live in an area where the power source is renewable. On that note, it looks to me like AI is finally going to be the impetus to get the U.S. to invest in and switch to nuclear power. Isn’t that altogether a good thing for the environment?

        • Thorry84@feddit.nl · ↑4 · edited · 3 hours ago

          Well, that’s sort of half right. Yes, you can run the smaller models locally, but usually it’s the bigger models that we want to use, and those would be very slow on a typical gaming computer, or even a high-end one. To make them go faster, the hardware used in datacenters is not only more optimised for the task, it’s also simply faster: each unit outperforms a gaming GPU, and far more units are used than you would normally find in a gaming PC.

          Now, these things aren’t magic; the basic technology is the same, so where does the speed come from? The answer is raw power: these things run insane amounts of power through them, with specialised cooling systems to keep them cool, and that comes at the cost of efficiency.

          So whilst running a model is much cheaper than training one, it is far from free. And whilst you can run a smaller model on your home PC, that isn’t directly comparable to how models are used in the datacenter. The use of AI is still very power hungry, even when not counting the training.

        • CheeseNoodle@lemmy.world · ↑4 ↓1 · 4 hours ago

          Yeh, but those local models are usually pretty underpowered compared to the ones that run via online services, and they’re still more demanding than any game.

        • FooBarrington@lemmy.world · ↑2 ↓1 · 2 hours ago

          I compared the TDP of an average high-end graphics card with the GPUs required to run big LLMs. Do you disagree?

            • FooBarrington@lemmy.world · ↑2 ↓1 · 2 hours ago

              They are; it’d be uneconomical not to use them fully the whole time. Look up how batching works.
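
              Roughly, batching means requests that arrive within a short window get grouped so a single forward pass serves them all. A toy sketch (not any real server’s code; names and timings are made up):

              ```python
              # Toy dynamic batching: collect requests briefly, serve them together.
              import queue, time

              pending = queue.Queue()  # incoming prompts from many users

              def batch_worker(run_model, max_batch=8, window_s=0.05):
                  while True:
                      batch = [pending.get()]  # block until one request arrives
                      deadline = time.time() + window_s
                      while len(batch) < max_batch:
                          timeout = deadline - time.time()
                          if timeout <= 0:
                              break
                          try:
                              batch.append(pending.get(timeout=timeout))
                          except queue.Empty:
                              break
                      run_model(batch)  # one GPU pass serves the whole batch
              ```

              The more requests share a batch, the more users share each joule the GPU draws.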

              • Jakeroxs@sh.itjust.works · ↑2 ↓2 · edited · 2 hours ago

                I mean, I literally run a local LLM, and while the model sits in memory it’s really not using up a crazy amount of resources. TBF, I should hook up something to actually measure exactly how much it’s pulling, vs just looking at htop/atop and guesstimating based on load.

                Vs when I play a game: the fans start blaring, it heats up, and you can clearly see the usage increasing across various metrics.
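
                (If you do want to measure rather than guesstimate, nvidia-smi already reports board power; a minimal poller, assuming an NVIDIA card:)

                ```python
                # Print GPU board power once a second via nvidia-smi (NVIDIA only).
                import subprocess, time

                while True:
                    watts = subprocess.check_output(
                        ["nvidia-smi", "--query-gpu=power.draw",
                         "--format=csv,noheader,nounits"],
                        text=True,
                    ).strip().splitlines()[0]  # first GPU's reading
                    print(f"{float(watts):.0f} W")
                    time.sleep(1)
                ```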

                • PeriodicallyPedantic@lemmy.ca · ↑3 · 1 hour ago

                  He isn’t talking about running it locally; he’s talking about what it takes for the AI providers to provide the AI.

                  Whether “it takes more energy during training” holds depends entirely on the load put on the inference servers and the size of the inference server farm.

                • MotoAsh@lemmy.world · ↑3 ↓1 · 1 hour ago

                  One user vs a public service is apples to oranges, and it’s actually hilarious you’re so willing to compare them.

                • FooBarrington@lemmy.world · ↑2 ↓1 · 1 hour ago

                  My guy, we’re not talking about just leaving a model loaded, we’re talking about actual usage in a cloud setting with far more GPUs and users involved.

      • jsomae@lemmy.ml · ↑3 · 5 hours ago

        There is so much rage today. Why don’t we, uh, destroy them with facts and logic?

        • Jakeroxs@sh.itjust.works · ↑2 ↓1 · 4 hours ago

          Hahaha, at this point even facts and logic are a rage-inducing argument. “My facts” vs “your facts”.