LOOK MAA I AM ON FRONT PAGE

  • cactopuses@lemm.ee · 8 hours ago

    While a fair idea, there are still two issues with that: hallucinations and the cost of running the models.

    Unfortunately, it takes significant compute resources to produce even simple responses, and those responses can be totally made up while still looking completely real. It’s gotten much better, sure, but blindly trusting these things (which many people do) can have serious consequences.

    • MangoCats@feddit.it · edited · 5 hours ago

      Hallucinations and the cost of running the models.

      So, inaccurate information in books is nothing new. Agreed that the rate of hallucinations needs to decline, a lot, but there has always been a need for a veracity filter - just because it comes from “a book” or “the TV” has never been an indication of absolute truth, even though many people stop there and assume it is. In other words: blind trust is not a new problem.

      The cost of running the models is an interesting one - how does it compare with printing on paper, shipping globally, and storing in environmentally controlled libraries that require individuals to physically travel to and from them to access the information? And what’s the price of the increased ignorance of the general population that results from high-cost information access?

      What good is a bunch of knowledge stuck behind a search engine when people don’t know how to access it, or access it efficiently?

      Granted, search engines already take us 95% (IMO) of the way from paper libraries to what AI is almost succeeding in being today, but ease of access to information has tremendous value - and developing ways to easily access the information available on the internet is a very valuable endeavor.

      Personally, I feel more emphasis should be put on establishing the veracity of the information before we go making all the garbage easier to find.

      I also worry that “easy access” to automated interpretation services is going to lead to a bunch of information encoded in languages most people don’t know, because they’re dependent on machines to do the translation for them. As an example: a shiny new programming language comes out, but a software developer is too lazy to learn it, so the developer uses AI to write code in the new language instead…