• 0 Posts
  • 2.01K Comments
Joined 2 years ago
Cake day: June 30th, 2023

  • Buddahriffic@lemmy.world to Microblog Memes@lemmy.world · BMW · 16 hours ago

    Frankly, they shouldn’t be driving at all if they need something like that to drive safely day to day. The bar for being allowed to drive is way too low IMO (and I thought this before seeing you say that and realizing you might be right about that).

    My thought after hearing about a lane assist that will fight you if you don’t signal is this: when I leave my lane without signaling, it means I really need to be out of that lane, not fighting some safety system built on the assumption that unusual things don’t happen. Even in usual situations, it just sounds like a feature that encourages paying less attention.

    Makes me glad to have a car where the most it does to “help” is traction control. Hell, even the ABS seems to be tuned for pavement rather than snow/ice, and I had to learn not to trust it to help me stop in those conditions and instead pump the brakes.


  • There’s a lot of space between “just let them carry on with whatever” and “beat them like I expected to be”. Not to mention, “getting my ass beat by my parents” might not mean literally getting beat, but can be a metaphor for any kind of discipline (though I can see how it can fall into the uncanny valley since there were and are parents that would literally beat asses).


  • Yeah, part of it is to teach that they won’t get their way by annoying you into giving in. Helps in my case that I can be a stubborn fuck, too. It means I have to choose my battles because I don’t want to back myself into a situation where I make a choice, realize it’s not the best one, but feel like I have to stand my ground to combat whining. Luckily we’re past the point of tantrums and she’s old enough that I can explain my reasoning in cases where I say one thing at first but then later change my mind.

    But there are two other parts, IMO. One is teaching them the right way to express what they want (as well as when stating what they want might be rude or out of line, like if it’s in response to getting a gift that isn’t their top choice). And the other is being open and honest about the why. I only use “because I said so” or some equivalent to deal with the endless chain of "why?"s (though I’ve found deflecting it back at her is also effective, like “why do you think it is?”).


    I can’t understand how such an obviously stupid approach to raising kids even got off the ground to the point of general awareness. Any intelligent adult should be able to see how learning to take a “no” is an essential part of growing up. Same with dealing with negative emotions in general, which I understand the whole “never say no” thing is trying to avoid.

    My daughter was taught how to take a no at a young age. It was a bit rough the first few times, but she quickly learned to take it in stride.


    It’s because they are horrible at problem solving and creativity. They are based on word association from training purely on text. The technological singularity will need to innovate on its own so that it can improve the hardware it runs on and its software.

    Even though GitHub Copilot has impressed me by implementing a three-file Python script from start to finish such that I barely wrote any code, I had to hold its hand the entire way and give it very specific instructions about every function as we added the pieces one by one to build it up. And even then, it would get parts I failed to specify completely wrong, and it initially implemented things in a very inefficient way.

    There are fundamental things that the technological singularity needs that today’s LLMs lack entirely. I think the changes that would be required to get there will also change them from LLMs into something else. The training is a part of it, but fundamentally, LLMs are massive word association engines. Words (or vectors translated to and from words) are their entire world, and they can only describe things with those words because they were trained on other people doing that.
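
    To make that concrete, here’s a toy sketch (the vocabulary, sizes, and values are all made up, nothing from a real model) of the translation between words and the vectors that are the model’s entire world:

    ```python
    # Toy sketch: the model never sees "words", only token ids mapped to vectors,
    # and it emits ids mapped back to words. Vocabulary and sizes are hypothetical.
    import numpy as np

    vocab = ["glue", "cheese", "pizza", "topping", "<end>"]  # made-up tiny vocabulary
    token_to_id = {word: i for i, word in enumerate(vocab)}

    embedding_dim = 8
    rng = np.random.default_rng(0)
    # In a real model these vectors are learned during training, not random.
    embeddings = rng.normal(size=(len(vocab), embedding_dim))

    def encode(text: str) -> list[int]:
        """Translate words into the ids/vectors the model actually operates on."""
        return [token_to_id[w] for w in text.split()]

    def decode(ids: list[int]) -> str:
        """Translate ids back into words for the human reading the output."""
        return " ".join(vocab[i] for i in ids)

    ids = encode("pizza topping")
    vectors = embeddings[ids]  # shape (2, 8): this is the model's entire "view"
    print(ids, vectors.shape, decode(ids))
    ```

    A real model does the same translation, just with tens of thousands of tokens and thousands of dimensions learned from the training text.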


  • Buddahriffic@lemmy.world to Technology@lemmy.world · What If There’s No AGI? · 12 days ago

    I don’t hate AI or LLMs. As much as it might mess up civilization as we know it, I’d like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay it than to realize it.

    I just think there are a lot of people fooled by their conversational capability into thinking they are more than what they are, and who use the fact that these models are massive, with billions or trillions of weights that the data is encoded into and that no one understands well enough to definitively say “this is why it suggested glue as a pizza topping”, to put whether or not they approach AGI in a grey zone.

    I’ll agree though that it was maybe too much to say they don’t have knowledge. “Having knowledge” is a pretty abstract and hard-to-define thing itself, though I’m also not sure it directly translates to having intelligence (which is also poorly defined tbf). Like, one could argue that encyclopedias have knowledge, but they don’t have intelligence. And I’d argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).


  • Buddahriffic@lemmy.world to Technology@lemmy.world · What If There’s No AGI? · 13 days ago

    Calling the errors “hallucinations” is kinda misleading because it implies there’s a base of real knowledge that false stuff occasionally gets mixed into. That’s not how LLMs work.

    LLMs are purely about associations between words. The models are just massive enough that they can attach a lot of context to those associations and seem conversational about almost any topic, but there’s no depth to any of it. Where it seems like there is, it’s just because the contexts in the training data got very specific, which is bound to happen when a model is trained on every online conversation its owners (or rather, people hired by people hired by its owners) could get their hands on.

    All it does is predict, given the set of tokens provided and already generated (plus a bit of randomness), the most likely token to come next, then repeat until it predicts an “end” token.
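
    Stripped down to a toy sketch, that loop looks something like this (the “model” below is just a hypothetical stand-in returning made-up probabilities, since the point is the shape of the loop, not a real network):

    ```python
    # Toy autoregressive generation loop: predict one token at a time,
    # with a bit of randomness, until an "end" token comes out.
    import random

    END = "<end>"

    def next_token_probs(context: list[str]) -> dict[str, float]:
        # Stand-in for the trained network: in reality this is billions of
        # weights scoring every token in the vocabulary given the context.
        if len(context) < 5:
            return {"word": 0.6, "another": 0.3, END: 0.1}
        return {END: 1.0}

    def generate(prompt: list[str]) -> list[str]:
        tokens = list(prompt)
        while True:
            probs = next_token_probs(tokens)
            # "a bit of randomness": sample from the distribution instead of
            # always taking the single most likely token
            choices, weights = zip(*probs.items())
            token = random.choices(choices, weights=weights, k=1)[0]
            if token == END:
                return tokens
            tokens.append(token)

    print(generate(["the", "prompt"]))
    ```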

    Early on when using LLMs, I’d ask them about how they did things or why they would fail at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn’t do. Its capabilities don’t actually include any self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn’t even have to reflect how it really works.