• balsoft@lemmy.ml
    7 hours ago

    They stopped doing research as it used to be for about 30 years.

    Was it really “like that” for any length of time? To me it seems like most people just believed whatever bullshit they saw on Facebook/Twitter/Insta/Reddit, otherwise it wouldn’t make sense to have so many bots pushing political content there. Before the internet it would be reading some random book/magazine you found, and before then it was hearsay from a relative.

    I think that the people who did the research will continue doing the research. It doesn’t matter whether it’s through a library, a search engine, Wikipedia sources, or AI sources. As long as you know how to read the actual source, compare it with other (probably contradictory) information, and synthesize a conclusion for yourself, you’ll be fine; if you didn’t want to do that, it was always easy to stumble upon misinfo or disinfo anyway.

    One actual problem AI might cause is if the scientists doing the research start using it without due diligence. People are definitely using LLMs to help them write and structure their papers ¹. That alone would probably be fine, but if they actually use it to “help” with methodology or other content… then we would indeed be in trouble, given how confidently incorrect LLM output can be.

    • NoodlePoint@lemmy.world
      2 hours ago

      I think that the people who did the research will continue doing the research.

      Yes, but that number is getting smaller. Where I live, most households rarely have a full bookshelf, but nearly every member of the family has a “smart” phone; they’ll grab the chance to use anything easier than spending hours going through a lot of books. I sincerely hope that methods of doing good research are still being taught, including the ability to distinguish good information from bad.