• CallMeAnAI@lemmy.world · 3 days ago

    Jesus Christ 🤦‍♂️

    MS puts out an “LLMs suck at this” study and y’all lose your minds.

      • vivendi@programming.dev · edited · 3 days ago

        One of the absolute best uses for LLMs is generating quick summaries of massive amounts of data. It’s pretty much the only use case where, as long as the model doesn’t overflow and immediately become incoherent [1], it is extremely useful.

        But nooooo, this is luddite.ml, where saying anything good about AI gets you burnt at the stake.

        Some of y’all would’ve lit the fire under Jan Hus if you’d lived in the 15th century.

        [1] This is mostly a concern for local models with smaller parameter counts running quantized; for premier models it’s not really an issue.
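
        A minimal sketch of that chunk-and-summarize pattern, assuming a local quantized model served through an OpenAI-compatible endpoint; the base URL (Ollama’s default), the model name, and the chunk size below are placeholder assumptions, not anything stated in this thread:

        ```python
        # Sketch: summarize a large document with a local quantized model behind
        # an OpenAI-compatible endpoint. The base_url, model name, and chunk
        # size are assumptions for illustration.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
        MODEL = "llama3.1:8b-instruct-q4_K_M"  # hypothetical quantized local model
        CHUNK_CHARS = 8000                     # stay well under the context window

        def summarize(text: str) -> str:
            # Chunk the input so a small-context model doesn't overflow and drift
            # into incoherence, then merge the partial summaries.
            chunks = [text[i:i + CHUNK_CHARS] for i in range(0, len(text), CHUNK_CHARS)]
            partials = []
            for chunk in chunks:
                resp = client.chat.completions.create(
                    model=MODEL,
                    messages=[{"role": "user",
                               "content": "Summarize this in a few bullet points:\n\n" + chunk}],
                )
                partials.append(resp.choices[0].message.content)
            resp = client.chat.completions.create(
                model=MODEL,
                messages=[{"role": "user",
                           "content": "Merge these partial summaries into one short summary:\n\n"
                                      + "\n".join(partials)}],
            )
            return resp.choices[0].message.content
        ```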

      • CallMeAnAI@lemmy.world · 3 days ago

        Because it’s good at other things, like creating tables and making full use of features that users typically aren’t informed about or don’t practice with. Being able to describe a table and how you want the data laid out for the best result is helpful.
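
        As a rough illustration of describing the table you want instead of building it by hand: a minimal sketch where the endpoint, model name, column names, and prompt wording are all made-up assumptions for the example, not anything from the thread:

        ```python
        # Sketch: ask the model for a table layout by describing it. The endpoint,
        # model, columns, and prompt wording are assumptions for illustration.
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

        prompt = (
            "Create a table for tracking monthly expenses.\n"
            "Columns: Date, Category, Vendor, Amount, Running Total.\n"
            "One row per purchase. Return a markdown table with three example rows, "
            "plus the Excel formula I would put in the Running Total column."
        )
        resp = client.chat.completions.create(
            model="llama3.1:8b-instruct-q4_K_M",  # hypothetical local model
            messages=[{"role": "user", "content": prompt}],
        )
        print(resp.choices[0].message.content)
        ```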