• 5 Posts
  • 1.34K Comments
Joined 2 years ago
Cake day: June 29th, 2023

  • I think you think the electorate likes genocide, or at least you said so, so I don’t understand why you think accusing Joe of genocide would have lost an election.

    If the American people really didn’t want genocide they would elect candidates in primaries that were anti genocide (they didn’t) or they would vote for the candidate who wanted to just maintain the genocide as it is instead of accelerating it (they didn’t).

    People complaining about the dems’ support of genocide while being silent about GOP support (including “genocide Joe” chanters, 3rd party voters and non-voters) helped Trump win and are responsible for the next 4 years of turbo genocide.

    This isn’t hard to figure out, but I guess my brain isn’t broken by genocide apologia, so maybe I can’t understand.


  • But every time we said the dems were doing a genocide, we were supposed to add that Trump would somehow be worse. Yet when you complain about us talking about the dems’ complicity in genocide, somehow you don’t have to mention that it’s a genocide? Because you didn’t do that.

    And despite the fact that you acknowledge the dems are complicit in genocide, you have no criticism of that because… something about democracy?

    Also if the electorate wants genocide that badly, then why is it bad if we put the genocide at their feet? Aren’t we helping them in that case? What are you upset about then?

    If the American people really didn’t want genocide they would elect candidates in primaries that were anti genocide (they didn’t) or they would vote for the candidate who wanted to just maintain the genocide as it is instead of accelerating it (they didn’t).

    You should say, “Yes, that’s my favourite genocider! A vote for Joe is a vote for genocide!” waves tiny plastic flag

    Your genocide apologia is breaking your brain.

    You could also learn the most basic facts about the US electoral system and understand that it is not democratic in the slightest, and people do not have a meaningful chance to vote for what they want.


  • So are you mad at the dems for making the genocide even worse by doing a genocide that helped them lose an election, thus making the genocide worse?

    Why is it leftists’ fault for telling the truth and not the dems’ fault for making it true?

    Why do we have to be fair to the dems to agree that Trump’s genocide would be worse when the dems worked so hard to make “worse” virtually unimaginable?

    Why do we have to be fair to you by always saying Trump is worse but you don’t have to be fair to us by acknowledging that there is an actual genocide?

    Just because you have some mental gymnastics to explain why the dems’ genocide is somehow something we shouldn’t talk about doesn’t mean you’re not denying it.


  • If mentioning a genocide helped elect Trump, then doing the genocide helped Trump far more, so I don’t know why you’re not attacking the dems for that.

    The genocide charge wouldn’t carry any weight if it wasn’t true.

    Why is this genocide more important to you as a political football than as, you know, a genocide?

    You’re a genocide denier. You’re not denying it’s happening, you’re just denying it’s worth talking about, which is maybe worse.


  • Also, apparently leftists have to temper our criticism of a genocide by mentioning that Trump is always somehow worse, despite there being no evidence that it is materially any worse under him - that’s literally a counterfactual - but somehow this person gets to criticise us for mentioning a genocide without acknowledging that it is actually a genocide.

    It’s genocide denial, but they’re not denying it’s happening, they’re just denying that it’s worth talking about, which is maybe worse?



  • Calling a genocide a genocide should not be a partisan issue, and if you think we need to temper our discussion of genocide so that your preferred genocider can win a fucking election then you are a genocide denier.

    The way for the dems to differentiate themselves on this issue was to stop doing a genocide. They couldn’t do that, and so they enabled the worse option because they were just too horny for killing brown kids.



  • Facebook has had a strategy for a long time of monopolising the internet in countries that previously had very little internet access. They essentially subsidise internet infrastructure and make that subsidy conditional on Facebook being a central part of the network.

    So I’m not surprised to hear this. They obviously have found ways to inveigle themselves into key infrastructure in lots of places, even if they couldn’t build it in from the ground up.


  • They are obviously not in a reasoning place. I wouldn’t try logic, but they are susceptible to emotional manipulation. That’s how they fell for fascist propaganda in the first place. I would go for emotional truth.

    You have to judge if you’re safe to do this, but the next time they’re screaming about their absurd conspiracies, I would get a really sad look on my face, make direct eye contact, shake my head and say, “You’re so full of hate, and it’s really sad.” Just go full sincerity and show them how you see them.

    You can even set them up for it. Next time you try telling them some fact that they’re going to have this hateful response to, you can have this in your back pocket. You start with a simple fact, they respond with hate, you reply by telling them they’re being hateful.

    This is a modification of this strategy: https://youtu.be/tZzwO2B9b64

    Basically, don’t waste time arguing with fascists, just point out that they’re being assholes.

    Now, I say you need to judge how safe you feel doing this, because you might be surprised how ballistic they go. People stuck in abusive behaviour patterns hate nothing more than having that behaviour simply described to them. But when they do lose their shit, you can just describe it again.

    Sometimes they will just short-circuit and try to ignore you, or chastise you for speaking out of turn. The authoritarian personality is deeply connected to authoritarian parenting attitudes. Just persist over time, and maybe they will notice that they can’t stop you from reflecting their ugly selves back at them.

    I don’t know how old you are, how physically big you are, how prone they are to serious outbursts, but again, pay attention to your body and how much you’re feeling your flight instinct. Only if you feel safe.

    I do this with my parents sometimes. Like if my mum is fussing over my kids in some way that I think is invasive - this was a sore point in my upbringing; she has no filter and no boundaries - I don’t engage on the facts of what she’s saying. I don’t tell her, “That tiny red spot you’ve noticed isn’t a big problem,” because that’s also being invasive and speaking on their behalf. I say, “People don’t like to be scrutinised like that. If that’s a real problem they can tell us.”

    It’s honestly astonishing how fast this resolves some situations. That might have been a perennial argument about some fussy detail of my child’s appearance, all the time adding to the boundary-crossing scrutiny they experience, but shutting it down by pointing out her behaviour really makes her stop, and it communicates to my kids that they don’t have to put up with it. It teaches them that they have autonomy.

    It’s taken many years of demonstrating to her that I won’t be pushed around or intimidated for me to get to this point though. It’s not an easy road, and often the way to know the tactic is working is by watching how unpleasant someone gets when you do it, at least at first.

    Again: only if you feel safe.



  • We don’t have the same problems LLMs have.

    LLMs have zero fidelity. They have no - none - zero - model of the world to compare their output to.

    Humans have biases and problems in our thinking, sure, but we’re capable of at least making corrections and working with meaning in context. We can recognise our model of the world and how it relates to the things we are saying.

    LLMs cannot do that job, at all, and they won’t be able to until they have a model of the world. A model of the world would necessarily include themselves, which is self-awareness, which is AGI. That’s a meaning-understander. Developing a world model is the same problem as consciousness.

    What I’m saying is that you cannot develop fidelity at all without AGI, so no, LLMs don’t have the same problems we do. That is an entirely different class of problem.

    Some moon rockets fail, but they don’t have that in common with moon cannons. One of those can in theory achieve a moon landing and the other cannot, ever, in any iteration.


  • If all you’re saying is that neural networks could develop consciousness one day, sure, and nothing I said contradicts that. Our brains are neural networks, so it stands to reason they could do what our brains can do. But the technical hurdles are huge.

    You need at least two things to get there:

    1. Enough computing power to support it.
    2. Insight into how consciousness is structured.

    1 is hard because a single brain alone is about as powerful as a significant chunk of worldwide computing; the gulf between our current power and what we would need is about… 100% of what we would need. We are woefully under-resourced for that. You also need to solve how to power those computers without cooking the planet, which is not something we’re even close to solving currently.

    2 means that we can’t just throw more power or training at the problem. Modern NN models have an underlying theory that makes them work. They’re essentially statistical curve-fitting machines. We don’t currently have a good theoretical model that would allow us to structure an NN to create a consciousness. It’s not even on the horizon yet.
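
    To make “statistical curve-fitting machine” concrete, here’s a minimal sketch - plain numpy, with made-up layer sizes and learning rate, not any real system - of a tiny network being fitted to a noisy sine curve by gradient descent. The basic shape of modern training is this same loop, scaled up enormously:

    ```python
    # A one-hidden-layer network fitted to noisy sin(x) by gradient descent.
    # This is the whole trick at microscopic scale: nudge weights to shrink
    # the squared error between the curve the network draws and the data.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
    y = np.sin(x) + rng.normal(0, 0.1, x.shape)      # noisy target curve

    hidden = 16                                      # arbitrary toy size
    W1 = rng.normal(0, 1, (1, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1))
    b2 = np.zeros(1)

    lr = 0.1
    for step in range(3000):
        h = np.tanh(x @ W1 + b1)                     # forward: draw the curve
        pred = h @ W2 + b2
        err = pred - y
        dW2 = h.T @ err / len(x)                     # backward: follow the
        db2 = err.mean(axis=0)                       # error gradient downhill
        dh = err @ W2.T * (1 - h**2)
        dW1 = x.T @ dh / len(x)
        db1 = dh.mean(axis=0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2

    print("mean squared error:", float((err**2).mean()))
    ```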

    Those are two enormous hurdles. I think saying modern NN design can create consciousness is like Jules Verne in 1867 saying we can get to the Moon with a cannon because of “what progress artillery science has made in the last few years”.

    Moon rockets are essentially artillery science in many ways, yes, but Jules Verne was still a century away in terms of supporting technologies, raw power, and essential insights into how to do it.



  • You’re definitely overselling how AI works and underselling how human brains work here, but there is a kernel of truth to what you’re saying.

    Neural networks are a biomimicry technology. They explicitly work by mimicking how our own neurons work, and surprise surprise, they create eerily humanlike responses.

    The thing is, LLMs don’t have anything close to reasoning the way human brains reason. We are actually capable of understanding and creating meaning; LLMs are not.

    So how are they human-like? Our brains are made up of many subsystems, each doing extremely focussed, specific tasks.

    We have so many, including sound recognition, speech recognition, language recognition. Then on the flipside we have language planning, then speech planning and motor centres dedicated to creating the speech sounds we’ve planned to make. The first three get sound into your brain and turn it into ideas, the last three take ideas and turn them into speech.

    We have made neural network versions of each of these systems, and even tied them together. An LLM is analogous to our brain’s language planning centre. That’s the part that decides how to put words in sequence.

    That’s why LLMs sound like us: they sequence words in a very similar way.

    However, each of these subsystems in our brains can loop back on itself to check the output. I can get my language planner to say “mary sat on the hill”, then loop that through my language recognition centre to see how my conscious brain likes it. My consciousness might notice that “the hill” is wrong and request new words until it gets “a hill”, which it believes is more fitting. It might even notice that “mary” is the wrong name and look for others - it might cycle through martha, marge, maths, maple, may - yes, that one. Okay, “may sat on a hill” - then send that to the speech planning centres to eventually come out of my mouth.

    Your brain does this so much you generally don’t notice it happening.
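
    As a hedged illustration of that loop-back - the planner and recogniser here are invented toy functions, not a model of real neurology - here is what generate-and-check looks like in code:

    ```python
    # Toy generate-and-check loop. The "planner" free-associates candidates
    # and the "recogniser" vetoes them until one fits. All names and rules
    # here are invented for illustration.

    def planner(slot):
        candidates = {
            "name": ["mary", "martha", "marge", "maths", "maple", "may"],
            "article": ["the", "a"],
        }
        yield from candidates[slot]       # best guess first, then alternatives

    def recogniser(slot, word):
        # The conscious check: does this word actually fit what I meant?
        if slot == "name":
            return word == "may"          # the name we actually meant
        if slot == "article":
            return word == "a"            # "a hill", not "the hill"
        return True

    def plan_sentence():
        words = []
        for slot in ["name", "article"]:
            for candidate in planner(slot):
                if recogniser(slot, candidate):   # loop back until it passes
                    words.append(candidate)
                    break
        return f"{words[0]} sat on {words[1]} hill"

    print(plan_sentence())                # -> "may sat on a hill"
    ```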

    In the 80s there was a craze around so-called “automatic writing”, which was essentially zoning out and just writing whatever popped into your head without editing. You’d get fragments of ideas and really strange things, often very emotionally charged. They seemed like they were coming from some mysterious place - maybe ghosts, demons, past lives, who knows? It was just our internal LLM being given free rein, but people got spooked into believing it was a real person, just like people think LLMs are people today.

    In reality we have no idea how to even start constructing a consciousness. It’s such a complex task and requires so much more linking and understanding than just a probabilistic connection between words. I wouldn’t be surprised if we were more than a century away from AGI.




  • I remember people talking about how the other smokers at work were all the cool people. And like, yeah, you spend several minutes several times a day hanging out outside with them, with no work and nothing to do but shoot the shit. Of course you like them better, you spend way more time with them.

    Also you can all bond over your common terrible life choices, what’s not to like?


  • Little Brother is a novel about a future dystopia where copyright laws have been allowed free rein to destroy people’s lives.

    It’s legislated that only “secure” hardware is allowed, but hardware is by definition fixed, which means that every time a vulnerability is found - which is inevitable - there is a hardware recall. So the black market is full of hardware which is proven to have jailbreaking vulnerabilities.

    Just a glimpse of where all this “trusted”, “secure” computing might lead.

    As a short video I saw many years ago put it: “trust always depends on mutuality, and they already decided not to trust you, so why should you trust them?”

    Edit: holy shit, it’s 15 years old, and “anti rrusted computing video dutch voice over” (turns out the guy is German actually) was enough to find it:

    https://www.lafkon.net/work/trustedcomputing/


  • Excrubulent@slrpnk.net to Lemmy Shitpost@lemmy.world · Infinite glitch · 21 days ago

    Assume, he says, that the distribution of holdings in a given society is just according to some theory based on patterns or historical circumstances—e.g., the egalitarian theory, according to which only a strictly equal distribution of holdings is just.

    Okay well this is immediately a false premise because nobody seriously makes this argument. This is a strawman of the notion of egalitarianism.

    Also, we don’t need Wilt Chamberlain to create an unequal society, we just need money. It’s easy enough to show that simply keeping an account of wealth and then randomly shuffling money around creates the unequal distribution that we see in the real world:

    https://charlie-xiao.github.io/assets/pdf/projects/inequality-process-simulation.pdf

    And every actor there began with that impossible, strictly egalitarian beginning. No actor was privileged in any way, nor had any merit whatsoever, but some wound up on top of an extremely unequal system.
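
    For anyone who wants to watch that happen rather than read the paper, here is a minimal sketch of such a random-exchange model. The population size, exchange rule and iteration count are arbitrary choices for illustration, not taken from the linked paper:

    ```python
    # Random-exchange model: everyone starts exactly equal, then money moves
    # between randomly chosen pairs. No skill, no merit, no privileged actor.
    import random

    n = 1000
    agents = [100.0] * n                  # strictly egalitarian start
    for _ in range(500_000):
        a, b = random.sample(range(n), 2)
        amount = random.uniform(0, agents[a])   # a hands b a random slice
        agents[a] -= amount
        agents[b] += amount

    # Gini coefficient: 0 = perfect equality, 1 = one agent holds everything.
    agents.sort()
    total = sum(agents)
    gini = sum((2 * (i + 1) - n - 1) * w for i, w in enumerate(agents)) / (n * total)
    print(f"Gini after shuffling: {gini:.2f}")   # starts at 0.00, ends far above
    ```

    The exact numbers depend on the exchange rule and how long you run it, but strict equality is not stable under any rule like this: inequality emerges from pure chance.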

    So Nozick just needs to look a little deeper at his own economic system to see the problem. There is no reason why we need to have a strict numerical accounting of wealth.