• Deflated0ne@lemmy.world · 15 hours ago

    It’s extremely wasteful. Inefficient to the extreme on both electricity and water. It’s being used by capitalists like a scythe. Reaping millions of jobs with no support or backup plan for its victims. Just a fuck you and a quip about bootstraps.

    It’s cheapening all creative endeavors. Why pay a skilled artist when your shitbot can excrete some slop?

    What’s not to hate?

    • Sibyls@lemmy.ml · 10 hours ago

      As with almost all technology, AI tech is evolving into different architectures that aren’t wasteful at all. There are now powerful models we can run that don’t even require a GPU, which is where most of that power was needed.
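
      A rough sketch of what the no-GPU case looks like in practice (the model file name and settings here are placeholders, not a recommendation of any specific model): a small 4-bit quantized model loaded through llama-cpp-python runs entirely on CPU threads.

          from llama_cpp import Llama  # pip install llama-cpp-python

          # Placeholder path to a quantized GGUF model downloaded beforehand
          MODEL_PATH = "models/small-chat-model.Q4_K_M.gguf"

          llm = Llama(
              model_path=MODEL_PATH,
              n_ctx=2048,       # context window
              n_threads=8,      # CPU threads to use
              n_gpu_layers=0,   # 0 = run entirely on the CPU, no GPU involved
          )

          out = llm("Summarize why sparse models are cheaper to run.", max_tokens=64)
          print(out["choices"][0]["text"])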

      The one wrong thing with your take is the lack of vision as to how technology changes and evolves over time. We had computers the size of rooms to run processes that our mobile phones can now run hundreds of times more efficiently and powerfully.

      Your other points are valid, people don’t realize how AI will change the world. They don’t realize how soon people will stop thinking for themselves in a lot of ways. We already see how critical thinking drops with lots of AI usage, and big tech is only thinking of how to replace their staff with it and keep consumers engaged with it.

      • SoftestSapphic@lemmy.world · 9 hours ago

        You are demonstrating in this comment that you don’t really understand the tech.

        The “efficient” models already spent the water and energy to train, and they’re inferior to the ones that need data centers, because you’re stuck with a bot trained on 2020-2022 data forever.

        They are less wasteful, but they’ll become just as wasteful the second we want them to catch up again.

        • Sibyls@lemmy.ml · 9 hours ago

          You are misunderstanding the tech. That’s not how this works; models are retrained often. Did you think this was done only a few years ago? The fact that you called them bots says everything.

          You’re just hating to hate on something, without understanding the technology. The efficiency I’m referring to comes from the MoE (mixture-of-experts) architecture, which only became popular within the last year. There are still new architectures being developed, not that you care about this topic; you’d rather blindly hate on what’s spewed from outdated and biased news sources.
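
          To make the MoE point concrete, here’s a toy sketch (not any particular model’s implementation): a gating layer scores 8 expert networks per token and only the top 2 actually run, so each token pays the compute of 2 experts while the layer holds 8 experts’ worth of parameters.

              import torch
              import torch.nn as nn

              class TinyMoE(nn.Module):
                  """Toy mixture-of-experts layer: 8 experts, only the top 2 run per token."""
                  def __init__(self, d_model=64, n_experts=8, k=2):
                      super().__init__()
                      self.experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_experts)])
                      self.gate = nn.Linear(d_model, n_experts)   # router: one score per expert
                      self.k = k

                  def forward(self, x):                            # x: (n_tokens, d_model)
                      scores = self.gate(x)                        # (n_tokens, n_experts)
                      weights, idx = scores.topk(self.k, dim=-1)   # keep only the k best experts
                      weights = weights.softmax(dim=-1)
                      out = torch.zeros_like(x)
                      for slot in range(self.k):                   # each token runs k experts, not all 8
                          for e, expert in enumerate(self.experts):
                              mask = idx[:, slot] == e
                              if mask.any():
                                  out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
                      return out

              moe = TinyMoE()
              print(moe(torch.randn(16, 64)).shape)   # torch.Size([16, 64])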

          • SoftestSapphic@lemmy.world · 8 hours ago

            Yeah nah

            Same shit people said in 2022

            In 3 more years you’ll be making the same excuses for the same shortcomings, because for you this isn’t about the tech, it’s about your ideology.

            • Sibyls@lemmy.ml · 8 hours ago

              You make weird assumptions seemingly based on outdated ideas. I’ll let you be, perhaps you need some rest.

    • iopq@lemmy.world · 13 hours ago

      It was also inefficient for a computer to play chess in 1980. Imagine using a hundred watts of power and a machine that cost thousands of dollars, and still not being able to beat an average club player.

      Now a phone will cream the world’s best at chess, and even at Go.

      Give it twenty years to become good. It will certainly do more with smaller, more efficient models as it improves.

      • Kay Ohtie@pawb.social · 12 hours ago

        If you want to argue in favor of your slop machine, you’re going to have to stop making false equivalences, or at least understand how they’re false. You can’t gain ground on things that are just tangential.

        A computer in 1980 was still a computer, not a chess machine. It did general-purpose processing and followed whatever you guided it to do. Neural models don’t do that; they’re each highly specialized and take a long time to train. And the issue isn’t with neural models in general.

        The issue is neural models that are purported to do things they functionally cannot, because that’s not how models work. Computing is complex, code is complex, and adding new functionality that operates off of fixed inputs alone is hard. And now we’re supposed to buy that something that builds word relationship vector maps is going to create something new?
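
        To be concrete about what I mean by “word relationship vector maps”, here’s a toy sketch with made-up numbers: words become vectors, and “meaning” gets reduced to how geometrically close those vectors sit to each other.

            import numpy as np

            # Toy 3-dimensional "embeddings" with invented values; real models learn
            # thousands of dimensions from co-occurrence statistics, but the idea is the same.
            vecs = {
                "king":  np.array([0.90, 0.10, 0.80]),
                "queen": np.array([0.85, 0.15, 0.90]),
                "apple": np.array([0.10, 0.90, 0.05]),
            }

            def cosine(a, b):
                return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

            print(cosine(vecs["king"], vecs["queen"]))  # ~0.99: "related" words
            print(cosine(vecs["king"], vecs["apple"]))  # ~0.20: unrelated words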

        For code generation, it’s the equivalent of copying and pasting from Stack Overflow with a find/replace, or just copying multiple projects together. It isn’t something new; it’s kitbashing at best, and that’s assuming it all works flawlessly.

        With art, it’s taking away creation from people and jobs. I like that you ignored literally every point raised except for the one you could dance around with a tangent. But all these CEOs are like “no one likes creating art or music.” And no, THEY just don’t want to spend time creating themselves or pay someone who does enjoy it.

        I love playing with 3D modeling and learning how to make the changes I want consistently. I like learning more about painting when texturing models and taking time to create intentional masks. I like taking time when I’m baking to learn and create; otherwise I could just buy a box mix of Duncan Hines and get something that’s fine, but not what I can make when I take the time to learn.

        And I love learning guitar. I love feeling that slow growth of skill as I find I can play cleaner the more I do. And when I can close my eyes and strum a song, there’s a tremendous feeling from making this beautiful instrument sing like that.

        • iopq@lemmy.world · 10 hours ago

          Stockfish can’t play Go. The resources you spent making the chess program didn’t port over.

          In the same way you can use a processor to run a completely different program, you can use a GPU to run a completely different model.

          So if current models can’t do it, you’d be foolish to bet that models twenty years from now still won’t be able to.

      • outhouseperilous@lemmy.dbzer0.com · 13 hours ago

        Not the same. The underlying tech of LLMs has massively diminishing returns. You can already see it, and could see it a year ago if you looked. Both in computing power and in required data, and we do not have enough data; we literally have not created enough in all of history.

        This is not “AI”; it’s a profoundly wasteful capitalist party trick.

        Please get off the slop and re-build your brain.

        • iopq@lemmy.world · 10 hours ago

          That’s the argument Paul Krugman used to justify his opinion that the internet peaked in 1998.

          You still need to wait for AI to crash, for a bunch of research to happen, and for the next wave to come. You can’t judge the internet by the dot-com crash; it became much more impactful later on.

              • outhouseperilous@lemmy.dbzer0.com · 8 hours ago

                One of the major contributors to early versions. Then they did the math and figured out it was a dead end. Yes.

                Also one of the other contributors (Weizenbaum, I think?) pointed out that not only was it stupid, it was dangerous: it made people into deranged, fanatical devotees impervious to reason, who would discard their entire intellect and education to form a cult around this shit, in a madness no logic could breach. And that’s just from ELIZA.

      • Deflated0ne@lemmy.world · 13 hours ago

        Show me the chess machine that caused rolling brownouts and polluted the air and water of a whole city.

        I’ll wait.

        • iopq@lemmy.world · 10 hours ago

          Servers had been eating up a significant portion of electricity for years before AI. What matters is whether we get something useful out of it.

          • Deflated0ne@lemmy.world · 4 hours ago

            That’s the hangup, isn’t it? It produces nothing of value. Stolen art. Bad code. Even more frustrating phone experiences. Oh, and millions of lost jobs and ruined lives.

            It’s the most American way they could possibly have set trillions of dollars on fire, short of carpet-bombing poor brown people somewhere.

          • CorvidCawder@sh.itjust.works · 6 hours ago

            Not even remotely close to this scale… At most you could compare the energy usage to the miners in the crypto craze, but I’m pretty sure that even that is just a tiny fraction of what’s going on right now.

            • Deflated0ne@lemmy.world · 4 hours ago

              Crypto miners wish they could be this inefficient. No, literally, they do. They’re the “rolling coal” mfers of the internet.

      • Dangerhart@lemmy.zip · 13 hours ago

        It seems like you are implying that models will follow Moore’s law, but as someone working on “agents” I don’t see that happening. There is a limit to how much can be encoded while still producing things that look like coherent responses. Where we would get reliably exponential amounts of training data is another issue. We may get “AI”, but it isn’t going to be based on LLMs.
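
        A back-of-the-envelope way to see those diminishing returns, as a sketch using a Chinchilla-style scaling law (the constants are roughly the published Hoffmann et al. 2022 fit and are used purely for illustration):

            # L(N, D) = E + A / N**alpha + B / D**beta
            # N = parameters, D = training tokens; constants ~ Hoffmann et al. (2022) fit.
            E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

            def loss(params, tokens):
                return E + A / params**alpha + B / tokens**beta

            N = 70e9  # 70B parameters, held fixed
            for D in (1e12, 1e13, 1e14, 1e15):  # 1T, 10T, 100T, 1000T training tokens
                print(f"{D:.0e} tokens -> predicted loss {loss(N, D):.3f}")
            # Each 10x increase in data buys a smaller and smaller drop in loss,
            # and the larger token counts run up against how much text actually exists to train on.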

        • iopq@lemmy.world · 10 hours ago

          You can’t predict how the next twenty years of research will improve on the current techniques, because we haven’t done the research yet.

          Is it going to be specialized agents? Because you don’t need a lot of data to do one task well. Or maybe it’s a lot of data but you keep getting more of it (robot movement? stock market data?)

      • jaykrown@lemmy.world · 12 hours ago

        Twenty years is a very long time, and “good” is relative. I give it about 2-3 years until we can run a model as powerful as Opus 4.1 on a laptop.

        • iopq@lemmy.world · 10 hours ago

          There will inevitably be a crash in AI, and people will forget about it for a while. Then some people will work on innovative techniques and make breakthroughs without fanfare.