• rustydrd@sh.itjust.works (+19/-1) · 9 hours ago

    Lots of AI is technologically interesting and has tons of potential, but this kind of chatbot and image/video generation stuff we got now is just dumb.

    • MrMcGasion@lemmy.world (+15/-3) · 8 hours ago (edited)

      I firmly believe we won’t get most of the interesting, “good” AI until after this current AI bubble bursts and goes down in flames. Once AI hardware is cheap, interesting people will use it to make cool things. But right now, the big players in the space are drowning out anyone who might do real AI work with potential, by throwing more and more hardware and money at LLMs and generative AI models, because they don’t understand the technology and see it as a way to get rich and powerful quickly.

      • NewDayRocks@lemmy.dbzer0.com (+4/-2) · 5 hours ago

        AI is good and cheap now because businesses are funding it at a loss, so not sure what you mean here.

        The problem is that because it’s cheap, anyone can make whatever they want, and most people make low-quality slop, which is why it’s not “good” in your eyes.

        Making a cheap or efficient AI doesn’t help the end user in any way.

        • SolarBoy@slrpnk.net (+2) · 1 hour ago

          It appears good and cheap, but it’s actually burning money, energy, and water like crazy. I think somebody mentioned that generating a 10-second video consumes about as much energy as riding a bike for 100 km.

          It’s not sustainable. I think the thing the person above you is referring to is if we ever manage to make LLMs and such which can be run locally on a phone or laptop with good results. That would make people experiment and try out things themselves, instead of being dependent on paying monthly for some services that can change anytime.

          • NewDayRocks@lemmy.dbzer0.com (+1) · 4 minutes ago

            You and OP are misunderstanding what is meant by good and cheap.

            It’s not cheap from a resource perspective like you say. However that is irrelevant for the end user. It’s “cheap” already because it is either free or costs considerably less for the user than the cost of the resources used. OpenAI or Meta or Twitter are paying the cost. You do not need to pay for a monthly subscription to use AI.

            So the quality of the content created is not limited by cost.

            Even if the AI bubble popped, that wouldn’t improve AI quality.

  • Brotha_Jaufrey@lemmy.world (+15) · 10 hours ago

    Not all AI is bad. But there’s enough widespread AI that’s helping cut jobs, spreading misinformation (or in some cases, actual propaganda), creating deepfakes, etc., that in many people’s eyes it paints a bad picture of AI overall. I also don’t trust AI because it’s almost exclusively owned by far-right billionaires.

    • DeathByBigSad@sh.itjust.works (+2) · 5 hours ago

      Machines replacing people is not a bad thing if they can actually perform the same or better; the solution to unemployment would be Universal Basic Income.

      • petrol_sniff_king@lemmy.blahaj.zone (+3) · 5 hours ago

        For labor people don’t like doing, sure. I can’t imagine replacing a friend of mine with a conversation machine that performs the same or better, though.

    • Blue_Morpho@lemmy.world (+2/-1) · 3 hours ago

      AI is literally making people dumber:

      And books destroyed everyone’s memory. People used to have fantastic memories.

      They are a massive privacy risk:

      No different than the rest of cloud tech. Run your AI local like your other self hosting.

      Are being used to push fascist ideologies into every aspect of the internet:

      Hitler used radio to push fascism into every home. It’s not the medium, it’s the message.

      And they are a massive environmental disaster:

      AI uses a GPU just like gaming uses a GPU. Training a new AI model uses about the same energy as Rockstar spent developing GTA5. But it’s easier to point at a centralized data center polluting the environment than at thousands of game developers spread across multiple offices creating even more pollution.

      Stop being a corporate apologist

      Run your own AI! Complaining about “corporate AI” is like complaining about corporate email. Host it yourself.

    • lmmarsano@lemmynsfw.com (+19/-18) · 15 hours ago

      Do you really need to have a list of why people are sick of LLM and Ai slop?

      With the number of times that refrain is regurgitated here ad nauseam, need is an odd way to put it. Sick of it might fit sentiments better. Done with this & not giving a shit is another.

    • AnonomousWolf@lemmy.world (+22/-41) · 15 hours ago (edited)

      If you ever take a flight for a holiday, or even drive long distance, and then cry about AI being bad for the environment, you’re a hypocrite.

      Same goes if you eat beef or have a really powerful gaming rig that you use a lot.

      There are plenty of valid reasons AI is bad, but the environmental argument seems weak, and most people making it are probably hypocrites. It’s barely a drop in the bucket compared to other things.

      • Jankatarch@lemmy.world (+15) · 10 hours ago

        Texas has just asked residents to take fewer showers while data centers built specifically for LLM training continue operating.

        This is more like feeling bad for not using a paper straw while the local factory dumps its used oil into the community river.

      • BroBot9000@lemmy.world (+25/-8) · 15 hours ago (edited)

        Ahh, so are you going to acknowledge the privacy invasion and brain rot caused by AI, or are you just going to focus on dismissing the environmental concerns? Cause I linked more than just the environmental impacts.

        • Draces@lemmy.world (+13/-10) · 13 hours ago

          Uh, dismissing that concern seems like a valid point? Do people have to comprehensively discredit the whole list to reply?

      • Sl00k@programming.dev (+4/-9) · 10 hours ago

        This echo chamber isn’t ready for this logical discussion yet unfortunately lol

        • CXORA@aussie.zone (+5/-1) · 7 hours ago

          When someone disagrees with me - echo chamber.

          When someone agrees with me - logical discussion.

      • Randomgal@lemmy.ca (+13/-16) · 13 hours ago

        You’re getting downvoted for speaking the truth to an echo chamber my guy.

        • Barrymore@sh.itjust.works (+21/-5) · 12 hours ago

          But he isn’t speaking the truth. AI itself is a massive strain on the environment, without any true benefit. You are being fed hype and lies by con men. Data centers being built to supply AIs are using water and electricity at alarming rates, taking away the resources from actual people living nearby, and raising the cost of those utilities at the same time.

          https://www.realtor.com/advice/finance/ai-data-centers-homeowner-electric-bills-link/

          • Blue_Morpho@lemmy.world (+1/-1) · 3 hours ago

            AI itself is a massive strain on the environment, without any true benefit

            Rockstar developing GTA5:
            ~6,000 employees × ~150 sq ft per employee (https://unspot.com/blog/how-much-office-space-do-we-need-per-employee/) × ~20 kWh per sq ft per year (https://esource.bizenergyadvisor.com/article/large-offices)
            ≈ 18,000,000,000 watt-hours per year

            vs

            10,000,000,000 watt-hours for ChatGPT training

            https://www.washington.edu/news/2023/07/27/how-much-energy-does-chatgpt-use/

            There are more 3D games developed each year than companies releasing new AI models.
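The comparison above is just three numbers multiplied together; a quick back-of-envelope sketch in Python, using only the figures the comment itself cites (none of these are measured data):

```python
# Back-of-envelope check of the office-energy figures cited in the comment.
employees = 6_000            # cited Rockstar headcount
sqft_per_employee = 150      # cited office space per employee
kwh_per_sqft_year = 20       # cited large-office energy intensity

office_kwh = employees * sqft_per_employee * kwh_per_sqft_year
office_wh = office_kwh * 1_000          # convert kWh to Wh
print(office_wh)                        # 18_000_000_000 Wh/year, as in the comment

chatgpt_training_wh = 10_000_000_000    # cited ChatGPT training estimate
print(office_wh / chatgpt_training_wh)  # 1.8
```

Note this compares one year of office energy against a one-off training run, which is part of why the comparison is contested in the replies.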

          • Ek-Hou-Van-Braai@piefed.social (OP) (+3/-7) · 9 hours ago

            The same can be said for taking flights to go on holiday.

            Flying emits far more CO2 and supports the oil industry.

          • Sl00k@programming.dev (+6/-7) · 10 hours ago

            This is valid to all data centers serving all websites. Your take is a criticism of unregulated capitalism, not AI.

            Beef farming is a far far far more impactful discussion, yet here we are.

            • CXORA@aussie.zone (+5) · 7 hours ago

              AI takes far more power to serve a single request than a website does, though.

              And remember, AI requires those websites too, for training data.

              So it’s not just more power hungry; it also has the initial power consumption added on top.

          • Draces@lemmy.world (+5/-7) · 11 hours ago

            And your car or flight is a massive strain on the environment. I think you’re missing the point. There’s a way to use tools responsibly. We’ve taken the chains off, and that’s obviously a problem, but the AI hate here is irrational.

          • absentbird@lemmy.world (+4/-5) · 12 hours ago

            The problem is the companies building the data centers; they would be just as happy to waste the water and resources mining crypto or hosting cloud gaming. If not for AI, it would be something else.

            In China they’re able to run DeepSeek without any water waste, because they cool the data centers with the ocean. DeepSeek also uses a fraction of the energy per query and is investing in solar and other renewables for energy.

            AI is certainly an environmental issue, but it’s only the most recent head of the big tech hydra.

          • Honytawk@lemmy.zip (+6/-13) · 12 hours ago

            AI uses 1/1000 the power of a microwave.

            Are you really sure you aren’t the one being fed lies by con men?

            • jimjam5@lemmy.world (+5) · 6 hours ago (edited)

              What? Elon Musk’s xAI data center in Tennessee (when fully expanded & operational) will draw 2 GW of power. That’s as much power as some entire cities use.
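For scale, the cited 2 GW figure is power, not energy; a rough sketch of what it would mean in annual energy, assuming (hypothetically) the facility ran at full load all year:

```python
# Convert the cited 2 GW draw into annual energy at a hypothetical 100% load.
power_gw = 2
hours_per_year = 24 * 365        # 8,760 hours
energy_gwh = power_gw * hours_per_year
print(energy_gwh)                # 17520 GWh, i.e. about 17.5 TWh per year
```

Real load factors are lower, so treat this as an upper bound on the comment's figure, not a measurement.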

            • Ace T'Ken@lemmy.ca (+10) · 9 hours ago (edited)

              Hi. I’m in charge of an IT firm that has been contracted, somewhat unwillingly, to help build one of these data centers in our city. We are currently in the groundbreaking phase, but I am looking at the papers and power requirements. You are absolutely wrong on the power requirements, unless you mean per query on a light load on an easy plan, but these centers will be handling millions if not billions of queries per day. Keep in mind that a single user query can also spawn dozens, hundreds, or thousands of separate queries… Generating a single image takes dramatically more energy than you are stating.

              Edit: I don’t think your statement addresses the amount of water it requires as well. There are serious concerns that our massive water reservoir and lake near where I live will not even be close to enough.

              Edit 2: Also, we were told to spec for at least 10x growth within the next 5 years. Unless there are massive gains in efficiency, I don’t think anywhere on the planet is capable of meeting those needs, even if the models become substantially more efficient.
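The "10x within 5 years" spec above implies a steep compound growth rate; a small sketch of the arithmetic (the 10x/5-year figures are the comment's, nothing else is):

```python
# Annualized growth factor implied by "10x growth within 5 years":
# solve rate**5 == 10 for rate.
target_multiple = 10
years = 5
annual_rate = target_multiple ** (1 / years)
print(round(annual_rate, 3))   # 1.585, i.e. roughly 58% growth every year
```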

          • Randomgal@lemmy.ca (+3/-9) · 12 hours ago

            Do you really think those data centers wouldn’t have been built if AI didn’t exist? Do you really think those municipalities would have turned down the same amount of money if it was for something else but equally destructive?

            What I’m hearing is you’re sick of municipal governance being in bed with big business. That you’re sick of big business being allowed to skirt environmental regulations.

            But sure. Keep screaming at AI. I’m sure the inanimate machine will feel really bad about it.

      • SugarCatDestroyer@lemmy.world (+5/-8) · 15 hours ago

        Hypocrisy could be called the primitive nature of man, who chooses whatever is easier because he is designed that way. Humanity is like a cancerous tumor on the planet.

    • Ek-Hou-Van-Braai@piefed.social (OP) (+10/-9) · 8 hours ago

      Because I used AI slop to create this shitpost, lol. So naturally it would make mistakes.

      There are other mistakes in the image too

        • Ek-Hou-Van-Braai@piefed.social (OP) (+6/-4) · 8 hours ago

          I mostly used it for irony (this is a shitpost, after all) and to make the orange arrow blue. But it messed some other things up along the way. Happy accidents.

          • JabbaTheThott@lemmy.world (+2) · 2 hours ago

            You know, I was really hoping people would just use the existing tools rather than AI. You used AI instead of the fucking paint bucket tool in ANY photo/drawing tool. Unbelievable

  • Deflated0ne@lemmy.world (+50/-6) · 16 hours ago

    The problem isn’t AI. The problem is Capitalism.

    The problem is always Capitalism.

    AI, Climate Change, rising fascism, all our problems are because of capitalism.

    • Ofiuco@piefed.ca (+10/-12) · 13 hours ago

      Wrong.
      The problem is humans. The same things that happen under capitalism can (and would) happen under any other system, because humans are the ones who make these things happen or allow them to happen.

      • zeca@lemmy.ml (+7/-1) · 9 hours ago

        Problems would exist in any system, but not the same problems. Each system has its own set of problems and challenges. Just look at history: problems change. Of course you can find analogies between problems, but their nature changes with our systems. Hunger, child mortality, pollution, having no free time, war, censorship, mass surveillance… these are not constant through history. They happen more or less depending on the social systems in place, which vary constantly.

      • Eldritch@piefed.world (+6/-4) · 13 hours ago

        While you aren’t wrong about human nature, I’d say you’re wrong about systems. How would the same thing happen under an anarchist system? Or under an actual communist (not Marxist-Leninist) system? Those account for human nature and aim to turn it against itself.

        • Ace T'Ken@lemmy.ca (+3) · 10 hours ago

          I’ll answer. Because some people see these systems as “good” regardless of political affiliation and want them furthered and see any cost as worth it. If an anarchist / communist sees these systems in a positive light, then they will absolutely try and use them at scale. These people absolutely exist and you could find many examples of them on Lemmy. Try DB0.

          • Eldritch@piefed.world (+3) · 7 hours ago

            And the point of anarchist or actual communist systems is that such scale would be minuscule. Not massive national or unanswerable state scales.

            And yes, I’m an anarchist. I know DB0 and their instance and generally agree with their stance - because it would allow any one of us to effectively advocate against it if we desired to.

            There would be no tech broligarchy forcing things on anyone. They’d likely all be hanged long ago. And no one would miss them as they provide nothing of real value anyway.

            • Blue_Morpho@lemmy.world (+1) · 2 hours ago

              And the point of anarchist or actual communist systems is that such scale would be minuscule.

              Every community running their own AI would be even more wasteful than corporate centralization. It doesn’t matter what the system is if people want it.

            • Ace T'Ken@lemmy.ca (+1) · 5 hours ago

              DB0 has a rather famous record of banning users who do not agree with AI. See !yepowertrippinbastards@lemmy.dbzer0.com or others for many threads complaining about it.

              You have no way of knowing what the scale would be, as it’s all a thought experiment, so let’s play at that: if you see AI as a nearly universal good and want to encourage people to use it, why not incorporate it into things? Why not foist it into the state OS or whatever?

              Buuuuut… keep in mind that in previous Communist regimes (even if you disagree that they were “real” Communists), what the state says will apply. If the state is actively pro-AI, then by default, you are using it. Are you too good to use what your brothers and sisters have said is good and will definitely 100% save labour? Are you wasteful, Comrade? Why do you hate your country?

        • Ofiuco@piefed.ca (+7/-2) · 12 hours ago

          It will happen regardless, because we are not machines; we don’t follow theory, laws, instructions, or whatever a system tells us to do perfectly and without little changes here and there.

          • pebbles@sh.itjust.works (+3/-1) · 12 hours ago

            I think you are underestimating how adaptable humans are. We absolutely conform to the systems that govern us, and they are NOT equally likely to produce bad outcomes.

            • JargonWagon@lemmy.world (+2/-3) · 10 hours ago

              Every system eventually ends with someone corrupted by power and greed wanting more. Putin and his oligarchs, Trump and his oligarchs… Xi isn’t great, but at least I haven’t heard news about the Uyghur situation for a couple of years now. Hope things are better there nowadays and people aren’t going missing anymore just for speaking out against their government.

              • pebbles@sh.itjust.works (+1) · 4 hours ago

                I mean you’d have to be pretty smart to make the perfect system. Things failing isn’t proof that things can’t be better.

          • Eldritch@piefed.world (+2/-2) · 12 hours ago

            I see, so you don’t understand. Or simply refuse to engage with what was asked.

      • Tja@programming.dev (+3/-4) · 13 hours ago

        Can, would… and did. The list of environmental disasters in the Soviet Union is long and intense.

    • SugarCatDestroyer@lemmy.world (+2/-11) · 15 hours ago

      Rather, our problem is that we live in a world where the strongest survive, and the strongest does not mean the smartest… So alas, we will always be in complete shit until we disappear.

      • chuckleslord@lemmy.world (+9/-2) · 14 hours ago

        That’s a pathetic, defeatist world view. Yeah, we’re victims of our circumstances, but we can make the world a better place than what we were raised in.

        • rumba@lemmy.zip (+3) · 13 hours ago

          You can try, and you should try. But some handful of generations ago, some assholes were in the right place at the right time and struck it rich. The ones who figured out generational wealth ended up with a disproportionate amount of power. The formula for using money to make more money was handed down, coddled, and protected to keep the rich and powerful in power. Even 100 Luigis wouldn’t make the tiniest dent in the oligarch pyramid, as others will just swoop in and consume their part.

          Any lifelong pursuit you have to make the world a better place than you were raised in will be wiped out with a scribble of black Sharpie on Ministry of Truth letterhead.

        • SugarCatDestroyer@lemmy.world (+2/-1) · 14 hours ago

          Well, you can believe that there is a chance, but there is none. It can only be created with sweat and blood. There are no easy ways, you know, and sometimes there are none at all, and sometimes even creating one seems like a miracle.

  • DeathByBigSad@sh.itjust.works (+4/-9) · 5 hours ago

    Reminder that Lemmy skews older, and older people are usually more conservative about things. Sure, politically, Lemmy leans left, but technologically, Lemmy is very conservative.

    Like, for example, you see people on Lemmy say they’ll switch to a dumbphone, but that’s probably even less secure, and they could’ve just used LineageOS or something and it would be far more private.

    • dil@lemmy.zip (+12/-1) · 4 hours ago

      Why does being progressive and into tech mean being into AI all of a sudden? It has never meant that; it’s the conservative mfs pushing AI for a reason. You think any sort of powerful AI is about to be open source and usable by the ppl? Not expensive af to run, with hella regulations behind who can use it?

      • dil@lemmy.zip (+8) · 4 hours ago

        I’m progressive and into tech; I don’t like fking generative AI tho, it’s the worst part of tech to me. AI can be great in the medical field, and it can be great as a supplementary tool, but mfs don’t use it that way. They just wanna sit on their asses and get rich off other ppl’s work.

  • bridgeenjoyer@sh.itjust.works (+33/-1) · 16 hours ago

    It’s true. We can have a nuanced view. I’m just so fucking sick of the paid-off media hyping this shit, and normies thinking it’s the best thing ever when they know NOTHING about it. And the absolute blind trust and corpo worship make me physically ill.

    • Honytawk@lemmy.zip (+7/-5) · 12 hours ago

      Nuance is the thing.

      Thinking AI is the devil, will kill your grandma, and will shit in your shoes is just as dumb as thinking AI is the solution to every problem, will take over the world, and will become our overlord.

      The truth is, like always, somewhere in between.

  • GregorGizeh@lemmy.zip (+23/-1) · 16 hours ago (edited)

    I don’t hate the concept as is, I hate how it is being marketed and shoved everywhere and into everything by sheer hype and the need for returns on the absurd amounts of money that were thrown at it.

    Companies use it to justify layoffs, create cheap vibed up products, delegate responsibilities to an absolutely not sentient or intelligent computer program. Not even mentioning the colossal amount of natural and financial resources being thrown down this drain.

    I read a great summary yesterday somewhere on here that essentially said “they took a type of computer model made to give answers to very specific questions it has been trained on, and then trained it on everything to make a generalist”. Except that doesn’t work: the broader the spectrum a model covers, the less accurate it will be.

    Identifying skin cancer? Perfect tool for the job.

    Giving drones the go ahead on an ambiguous target? Providing psychological care to people in distress? FUCK NO.

  • Mostly_Roblox@lemmy.world (+34/-6) · 18 hours ago

    I personally think of AI as a tool, what matters is how you use it. I like to think of it like a hammer. You could use a hammer to build a house, or you could smash someone’s skull in with it. But no one’s putting the hammer in jail.

    • PeriodicallyPedantic@lemmy.ca (+16/-2) · 17 hours ago

      Yeah, except it’s a tool that most people don’t know how to use but everyone can use, leading to environmental harm, a rapid loss of media literacy, and a huge increase in wealth inequality due to turmoil in the job market.

      So… It’s not a good tool for the average layperson to be using.

      • Randomgal@lemmy.ca (+4/-2) · 13 hours ago

        Stop drinking the Kool-Aid, bro. Think about these statements critically for a second. Environmental harm? Sure. I hope you’re a vegan as well.

        Loss of media literacy: what does this even mean? People are doing things the easy way instead of the hard way? Yes, of course cutting corners is bad, but the problem is the conditions that lead a person to cut corners; the problem is the demand for maximum efficiency at any cost, for top numbers. AI is making a problem evident, not causing it. If you’re home on a Friday after your second shift of the day, fuck yeah you want to do things easy and fast. Literacy what? Just let me watch something funny.

        Do you feel you’ve become more stupid? Do you think it’s possible? Why would other people, who are just like you, be these puppets brainwashed by the evil machine?

        Ask yourself: how are people measuring intelligence? Creativity? How many people were in these studies, and who funded them? If we had the measuring instruments needed to actually make categorizations like “people are losing intelligence,” psychologists wouldn’t still be arguing over the exact definition of intelligence.

        Stop thinking of AI as a boogeyman inside people’s heads. It is a machine. People use the machine to achieve mundane goals; that doesn’t mean the machine created the goal or is responsible for everything wrong with humanity.

        Huge increase in inequality? What? Brother, AI is a machine. It is the robber barons that are exploiting you and all of the working class to get obscenely rich. AI is the tool they’re using. AI can’t be held accountable. AI has no will. AI is a tool. It is people that are increasing inequality. It is the system held in place by these people that rewards exploitation and encourages you to look at the evil machine instead. And don’t even use it; the less you know, the better. If you never engage with AI technology, you’ll believe everything I say about how evil it is.

        • petrol_sniff_king@lemmy.blahaj.zone (+1) · 4 hours ago

          Literacy what? Just let me watch something funny.

          This is like the most pro-illiteracy thing I’ve ever read.

          Do you feel you’ve become more stupid?

          My muscles were weaker until I started training. As it turns out, the modern convenience that allows me to sit around all day doesn’t actually make me stronger by itself.

          It is people that are increasing inequality.

          Yes, what if the billionaires simply chose not to, hm? Have I ever thought of that? Probably not, I’m very stupid.

    • oppy1984@lemdro.id (+23/-10) · 16 hours ago (edited)

      Seriously, the AI hate gets old fast. Like you said, it’s a tool. Get over it, people.

    • kibiz0r@midwest.social (+11/-11) · 13 hours ago (edited)

      “Guns don’t kill people, people kill people”

      Edit:

      Controversial reply, apparently, but this is literally part of the script to a Philosophy Tube video (relevant part is 8:40 - 20:10)

      We sometimes think that technology is essentially neutral. It can have good or bad effects, and it might be really important who controls it. But a tool, many people like to think, is just a tool. “Guns don’t kill people, people do.” But some philosophers have argued that technology can have values built into it that we may not realise.

      The philosopher Don Ihde says tech can open or close possibilities. It’s not just about its function or who controls it. He says technology can provide a framework for action.

      Martin Heidegger was a student of Husserl’s, and he wrote about the ways that we experience the world when we use a piece of technology. His most famous example was a hammer. He said when you use one you don’t even think about the hammer. You focus on the nail. The hammer almost disappears in your experience. And you just focus on the task that needs to be performed.

      Another example might be a keyboard. Once you get proficient at typing, you almost stop experiencing the keyboard. Instead, your primary experience is just of the words that you’re typing on the screen. It’s only when it breaks or it doesn’t do what we want it to do, that it really becomes visible as a piece of technology. The rest of the time it’s just the medium through which we experience the world.

      Heidegger talks about technology withdrawing from our attention. Others say that technology becomes transparent. We don’t experience it. We experience the world through it. Heidegger says that technology comes with its own way of seeing.

      Now some of you are looking at me like “Bull sh*t. A person using a hammer is just a person using a hammer!” But there might actually be some evidence from neurology to support this.

      If you give a monkey a rake that it has to use to reach a piece of food, then the neurons in its brain that fire when there’s a visual stimulus near its hand start firing when there’s a stimulus near the end of the rake, too! The monkey’s brain extends its sense of the monkey body to include the tool!

      And now here’s the final step. The philosopher Bruno Latour says that when this happens, when the technology becomes transparent enough to get incorporated into our sense of self and our experience of the world, a new compound entity is formed.

      A person using a hammer is actually a new subject with its own way of seeing - ‘hammerman.’ That’s how technology provides a framework for action and being. Rake + monkey = rakemonkey. Makeup + girl is makeupgirl, and makeupgirl experiences the world differently, has a different kind of subjectivity because the tech lends us its way of seeing.

      You think guns don’t kill people, people do? Well, gun + man creates a new entity with new possibilities for experience and action - gunman!

      So if we’re onto something here with this idea that tech can withdraw from our attention and in so doing create new subjects with new ways of seeing, then it makes sense to ask when a new piece of technology comes along, what kind of people will this turn us into.

      I thought that we were pretty solidly past the idea that anything is “just a tool” after seeing Twitler scramble Grok’s innards to advance his personal politics.

      Like, if you still had any lingering belief that AI is “like a hammer”, that really should’ve extinguished it.

      But I guess some people see that as an aberrant misuse of AI, and not an indication that all AI has an agenda baked into it, even if it’s more subtle.

        • Pup Biru@aussie.zone
          link
          fedilink
          English
          arrow-up
          4
          ·
          edit-2
          14 hours ago

          as an aussie, yeah, then you should stop people from having guns

          i honestly wouldn’t be surprised if the total number of gun deaths in australia since we banned guns (1996) was less than the number of gun deaths in the US THIS WEEK

          the reason is irrelevant: the cause is obvious… and id have bought the “to stop a tyrannical government” argument a few years ago, but ffs there’s all the kids dying in school and none of them stop the tyrant, so maybe that’s a fucking awful argument and we have it right down under

          • Kintarian@lemmy.world
            link
            fedilink
            arrow-up
            3
            ·
            9 hours ago

            I’ve never understood how a redneck prepper thinks he’s going to protect himself with a bunch of guns from a government that has millions of soldiers, tanks, machine guns, Sidewinder missiles and nuclear weapons.

      • Grimy@lemmy.world
        link
        fedilink
        arrow-up
        7
        arrow-down
        1
        ·
        16 hours ago

        Bad faith comparison.

        The reason we can argue for banning guns and not hammers is specifically because guns are meant to hurt people. That’s literally their only use. Hammers have a variety of uses and hurting people is definitely not the primary one.

        AI is a tool, not a weapon. This is kind of melodramatic.

          • Pup Biru@aussie.zone
            link
            fedilink
            English
            arrow-up
            4
            arrow-down
            4
            ·
            15 hours ago

            then you have little understanding of how genai works… the social impact of genai is horrific, but to argue the tool is wholly bad conveys a complete or purposeful misunderstanding of context

            • considerealization@lemmy.ca
              link
              fedilink
              arrow-up
              3
              arrow-down
              1
              ·
              13 hours ago

              I’m not an expert in AI systems, but here is my current thinking:

              Insofar as ‘GenAI’ is defined as

              AI systems that can generate new content, including text, images, audio, and video, in response to prompts or inputs

              I think this is genuinely bad tech. In my analysis, there are no good use cases for automating this kind of creative activity in the way that the current technology works. I do not mean that all machine assisted generation of content is bad, but just the current tech we are calling GenAI, which is of the nature of “stochastic parrots”.

              I do not think every application of ML is trash. E.g., AI systems like AlphaFold are clearly valuable and important, and in general the application of deep learning to solve particular problems in limited domains is valuable.

              Also, if we first had a genuinely sapient AI, then its creation would be of a different kind, and I think it would not be inherently degenerative. But that is not the technology under discussion. Applications of symbolic AI to assist in exploring problem spaces, or ML to solve classification problems, also seem genuinely useful.

              But, indeed, all the current tech that falls under GenAI is genuinely bad, IMO.

              • Pup Biru@aussie.zone
                link
                fedilink
                English
                arrow-up
                1
                ·
                4 hours ago

                things like the “patch x out of an image” feature allow people to express themselves with their own creative works more fully

                text-based genai has myriad purposes that don’t involve wholesale generation of entirely new creative works:

                using it as a natural language parser in low-stakes situations (think: you’re browsing a webpage and want to add an event to your calendar, but it just has a paragraph of text that says “next wednesday at xyz”)

                the generative part makes them generically more useful than specialist models (though certainly less accurate most of the time), and people can use them to build novel things on top of, rather than being limited to the original intent of the model creator

                everything genai is used for should be low-stakes: things that humans can check quickly, or where it doesn’t matter if it’s wrong… because it will be wrong some of the time

          • Ifera@lemmy.world
            link
            fedilink
            arrow-up
            3
            arrow-down
            4
            ·
            15 hours ago

            GenAI is a great tool for devouring text and making practice questions, study guides, and summaries; it has been used as a marvelous tool for education and research. Hell, if set up properly, you can get it to give you references and markers on your original data for where to find the answers to the questions on the study guide it made you.

            It is also really good for translation and simplification of complex text. It has its uses.

            But the oversimplification and massively broad scope LLMs have taken, plus the lack of proper training for users, are part of the problem Capitalism is capitalizing on. They don’t care for the consumer’s best interest; they just care for a few extra pennies, even if those are coated in the blood of the innocent. But a lot of people just foam at the mouth when they hear “AI”.

            • considerealization@lemmy.ca
              link
              fedilink
              arrow-up
              2
              arrow-down
              2
              ·
              13 hours ago

              Those are not valuable use cases. “Devouring text” and generating images is not something that benefits from automation. Nor is summarization of text. These do not add value to human life and they don’t improve productivity. They are a complete red herring.

              • Ifera@lemmy.world
                link
                fedilink
                arrow-up
                3
                ·
                12 hours ago

                Who talked about image generation? That one is pretty much useless, for anything that needs to be generated on the fly like that, a stick figure would do.

                Devouring text like that has been instrumental in learning for my students, especially the ones who have English as a Second Language (ESL), so its usability in teaching would be interesting to discuss.

                Do I think general open LLMs are the future? Fuck no. Do I think they are useless and unjustifiable? Neither. I think, in their current state, they are a brilliant beta test of the dangers and virtues of large language models: how they interact with the human psyche, and how they can help minorities, especially immigrants and other oppressed groups (hence why I advocated for providing a class on how to use them appropriately for my ESL students), bridge gaps in understanding, realize their potential, and have a better future.

                However, we need to solve, or at least reduce, the grip Capitalism has on this technology. As long as it is fueled by Capitalism, enshittification, dark patterns, and many other evils will strip it of its virtues and sell them for parts.

      • Ignotum@lemmy.world
        link
        fedilink
        arrow-up
        10
        arrow-down
        3
        ·
        17 hours ago

        My skull-crushing hammer that is made to crush skulls and nothing else doesn’t crush skulls, people crush skulls
        In fact, if more people had skull-crushing hammers in their homes, i’m sure that would lead to a reduction in the number of skull-crushings, the only thing that can stop a bad guy with a skull-crushing hammer, is a good guy with a skull-crushing hammer

        • Pup Biru@aussie.zone
          link
          fedilink
          English
          arrow-up
          5
          ·
          14 hours ago

          you’re absolutely right!

          the ban on guns in australia has been disastrous! the number of good guys with guns has dropped dramatically and … well, so has the number of bad guys … but that’s a mirage! ignore our near 0 gun deaths… that’s a statistical anomaly!

  • Truscape@lemmy.blahaj.zone
    link
    fedilink
    arrow-up
    55
    arrow-down
    11
    ·
    edit-2
    19 hours ago

    Distributed platform owned by no one founded by people who support individual control of data and content access

    Majority of users are proponents of owning what one makes and supporting those who create art and entertainment

    AI industry shits on above comments by harvesting private data and creative work without consent or compensation, along with being a money, energy, and attention tar pit

    Buddy, do you know what you’re here for?

    EDIT: removed bot accusation, forgot to check user history

    • dactylotheca@suppo.fi
      link
      fedilink
      English
      arrow-up
      37
      arrow-down
      6
      ·
      19 hours ago

      Or are you yet another bot lost in the shuffle?

      Yes, good job, anybody with opinions you don’t like is a bot.

      It’s not like this was even a pro-AI post; it was just pointing out that even the most facile “AI bad, applause please” stuff will get massively upvoted.

        • dactylotheca@suppo.fi
          link
          fedilink
          English
          arrow-up
          9
          arrow-down
          3
          ·
          19 hours ago

          HaVe YoU ConSiDeReD thE PoSSiBiLiTY that I’m not pro-AI and I understand the downsides, and can still point out that people flock like lemmings (*badum tss*) to any “AI bad” post regardless of whether it’s actually good or not?

          • Doll_Tow_Jet-ski@fedia.io
            link
            fedilink
            arrow-up
            2
            ·
            14 hours ago

            Ok, so your point is: Look! People massively agree with an idea that makes sense and is true.

            Color me surprised…

          • grrgyle@slrpnk.net
            link
            fedilink
            arrow-up
            3
            arrow-down
            2
            ·
            18 hours ago

            Why would a post need to be good? It just needs a good point. Like, this post is good enough, even if I don’t agree that we have enough facile “AI = bad” posts.

            Depends on the community, but for most of them pointing out ways that ai is bad is probably relevant, welcome, and typical.

        • Voyajer@lemmy.world
          link
          fedilink
          arrow-up
          2
          ·
          18 hours ago

          Why would you lend any credence to the weakest appeal to the masses presented on the site?

      • Truscape@lemmy.blahaj.zone
        link
        fedilink
        arrow-up
        7
        arrow-down
        6
        ·
        19 hours ago

        Yeah, I guess that was a bit too far, posted before I checked the user history or really gave it time to sit in my head.

        Still, this kind of meme is usually used to imply that the comment is just a trend rather than a legitimate statement.

  • RushLana@lemmy.blahaj.zone
    link
    fedilink
    English
    arrow-up
    36
    arrow-down
    5
    ·
    19 hours ago

    How dare people not like the automatic bullshit machine pushed down their throat…

    Seriously, generative AI’s accomplishments are:

    • Making mass spam easier
    • Burning the planet
    • Making people lose their jobs while not even being a decent replacement
    • Making all search engines and information sources worse
    • Creating an economic bubble that will fuck up the economy even harder
    • Easing mass surveillance and weakening privacy everywhere
    • mechoman444@lemmy.world
      link
      fedilink
      arrow-up
      9
      arrow-down
      4
      ·
      19 hours ago

      Yes. AI can be used for spam, job cuts, and creepy surveillance, no argument there, but pretending it’s nothing more than a corporate scam machine is just lazy cynicism. This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors, giving deaf people real-time conversations through instant transcription, translating entire languages on the fly, mapping wildfire and flood zones so first responders know exactly where to go, accelerating scientific breakthroughs from climate modeling to space exploration, and cutting out the kind of tedious grunt work that wastes millions of human hours a day. The problem isn’t that AI exists, it’s that a lot of powerful people use it selfishly and irresponsibly. Blaming the tech instead of demanding better governance is like blaming the printing press for bad propaganda.

      • kibiz0r@midwest.social
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        1
        ·
        17 hours ago

        This same “automatic BS” is helping discover life-saving drugs, diagnosing cancers earlier than some doctors

        Not the same kind of AI. At all. Generative AI vendors love this motte-and-bailey.

      • atopi@piefed.blahaj.zone
        link
        fedilink
        English
        arrow-up
        4
        ·
        16 hours ago

        Aren’t those different types of AI?

        I don’t think anyone hating AI is referring to the code that makes enemies move, or sorts things into categories.

    • Ek-Hou-Van-Braai@piefed.socialOP
      link
      fedilink
      English
      arrow-up
      12
      arrow-down
      11
      ·
      edit-2
      19 hours ago

      One could have said many of the same things about a lot of new technologies.

      The Internet, nuclear power, rockets, airplanes, etc.

      Any new disruptive technology comes with drawbacks and can be used for evil.

      But that doesn’t mean it’s all bad, or that it doesn’t have its uses.

      • RushLana@lemmy.blahaj.zone
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        2
        ·
        18 hours ago

        Give me one real world use that is worth the downside.

        As a dev I can already tell you it’s not coding or anything around code. Projects get spammed with low-quality, nonsensical bug reports; AI-generated code rarely works and doesn’t integrate well (on top of pushing all the work onto the reviewer, which is already the hardest part of coding); and AI-written documentation is riddled with errors and is not legible.

        And even if AI were remotely good at something, it’s still the equivalent of a microwave trying to replace the entire restaurant kitchen.

        • Ek-Hou-Van-Braai@piefed.socialOP
          link
          fedilink
          English
          arrow-up
          4
          arrow-down
          2
          ·
          edit-2
          16 hours ago

          I can run a small LLM locally which I can talk to using voice to turn certain lights on and off, set reminders for me, play music etc.

          There are MANY examples of LLMs being useful. The tech has its drawbacks just like any big technology, but saying it has no uses that are worth it is ridiculous.
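
A setup like that usually keeps the LLM boxed in: the model only classifies the utterance into one of a few known intents, and deterministic code does everything else, so a wrong answer fails safely. A minimal sketch of that routing, with `fake_llm` standing in for the local model and all intent names and the JSON schema invented for illustration:

```python
import json

# Stand-in for the local model; a real setup would prompt a small
# local LLM to map an utterance onto one of a few known intents.
def fake_llm(utterance: str) -> str:
    return '{"intent": "light_off", "target": "bedroom"}'

# The model never touches the hardware directly: it only picks an
# intent, and each handler is ordinary deterministic code.
HANDLERS = {
    "light_on": lambda target: f"turning on the {target} lights",
    "light_off": lambda target: f"turning off the {target} lights",
    "play_music": lambda target: f"playing {target}",
}

def handle(utterance: str) -> str:
    try:
        reply = json.loads(fake_llm(utterance))
    except ValueError:
        return "sorry, say that again?"  # malformed output fails safe
    handler = HANDLERS.get(reply.get("intent"))
    if handler is None:
        return "sorry, say that again?"  # unknown intent fails safe
    return handler(reply.get("target", ""))

result = handle("turn off the lights in the bedroom")
```

The worst case is the assistant asking you to repeat yourself, which is the low-stakes trade-off being argued for here.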

          • RushLana@lemmy.blahaj.zone
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            12 hours ago

            But we could do voice assistants well before LLMs (look at Siri), and without setting everything on fire.

            And seriously, I asked for something that’s worth all the downsides and you bring up Clippy 2.0???

            Where are the MANY examples? Why are LLM/genAI companies burning money? Where are the companies making use of the supposedly many uses?

            I genuinely want to understand.

            • Ek-Hou-Van-Braai@piefed.socialOP
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              2
              ·
              11 hours ago

              You asked for one example, I gave you one.

              It’s not just voice, I can ask it complex questions and it can understand context and put on lights or close blinds based on that context.

              I find it very useful with no real drawbacks

          • PeriodicallyPedantic@lemmy.ca
            link
            fedilink
            arrow-up
            7
            arrow-down
            4
            ·
            edit-2
            16 hours ago

            That’s like saying “asbestos has some good uses, so we should just give every household a big pile of it without any training or PPE”

            Or “we know leaded gas harms people, but we think it has some good uses so we’re going to let everyone access it for basically free until someone eventually figures out what those uses might be”

            It doesn’t matter that it has some good uses and that later we went “oops, maybe let’s only give it to experts to use”. The harm has already been done by eager supporters, intentional or not.

            • Honytawk@lemmy.zip
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              1
              ·
              12 hours ago

              No that is completely not what they are saying. Stop arguing strawmen.

          • Rampsquatch@sh.itjust.works
            link
            fedilink
            arrow-up
            2
            arrow-down
            2
            ·
            11 hours ago

            I can run a small LLM locally which I can talk to using voice to turn certain lights on and off, set reminders for me, play music etc.

            Neat trick, but it’s not worth the headache of setup when you can do all that by getting off your chair and pushing buttons. Hell, you don’t even have to get off your chair! A cellphone can do all that already, and you don’t even need voice commands to do it.

            Are you able to give any actual examples of a good use of an LLM?

            • Ek-Hou-Van-Braai@piefed.socialOP
              link
              fedilink
              English
              arrow-up
              1
              arrow-down
              2
              ·
              10 hours ago

              Like it or not, that is an actual example.

              I can lie in bed and turn off the lights, or turn on certain music, without touching my phone.

              I could ask if I remembered to lock the front door etc.

              But okay, I’ll play your game, let’s pretend that doesn’t count.

              I can use my local AI to draft documents or emails speeding up the process a lot.

              Or I can use it to translate.

              • Rampsquatch@sh.itjust.works
                link
                fedilink
                arrow-up
                1
                arrow-down
                1
                ·
                10 hours ago

                If you want to live your life like that, go for it, that’s your choice. But I don’t think those applications are worth the cost of running an LLM. To be honest, I find it frivolous.

                I’m not against LLMs as a concept, but the way they get shoved into everything without thought and without an “AI” free option is absurd. There are good reasons why people have a knee-jerk anti-AI reaction, even if they can’t articulate it themselves.

                • Ek-Hou-Van-Braai@piefed.socialOP
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  arrow-down
                  1
                  ·
                  edit-2
                  10 hours ago

                  It’s not expensive for me to run a local LLM, I just use the hardware I’m already using for gaming. Electricity is cheap and most people with a gaming PC probably use more electricity gaming than they would running their own LLM and asking it some questions.

                  I’m also against shoving AI into everything and not making it opt-in. I’m also worried about privacy, concentration of power, etc.

                  But just outright saying LLMs are bad is ridiculous.

                  And saying there is no good reason to use them is ridiculous. Can we stop doing that?

      • PeriodicallyPedantic@lemmy.ca
        link
        fedilink
        arrow-up
        2
        arrow-down
        1
        ·
        16 hours ago

        Of those, only the internet was turned loose on an unsuspecting public, and they had decades of the faucet slowly being opened, to prepare.

        Can you imagine if, after WW2, Wernher von Braun came to the USA and then just, like… gave every man, woman, and child a rocket, with no training? Good and evil wouldn’t even come into it; it’d be chaos and destruction.

        Imagine if every household got a nuclear reactor to power it, but none of the people in the household got any training in how to care for it.

        It’s not a matter of good and evil, it’s a matter of harm.

        • Ek-Hou-Van-Braai@piefed.socialOP
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          2
          ·
          16 hours ago

          The Internet kind of was turned loose on an unsuspecting public. Social media has caused and still is causing a lot of harm.

          Did you really compare every household having a nuclear reactor with people having access to AI?

          How is that even remotely a fair comparison?

          To me the Internet being released on people and AI being released on people is more of a fair comparison.

          Both can do lots of harm and good, both will probably cost a lot of people their jobs etc.

      • deur@feddit.nl
        link
        fedilink
        arrow-up
        5
        arrow-down
        6
        ·
        18 hours ago

        Absolutely brain-dead to compare the probability engine “AI”, which has no fundamental use beyond its marketed value, with a wide variety of truly useful innovations that did not involve marketing in their design.