One of the absolute best uses for LLMs is to generate quick summaries for massive data. It is pretty much the only use case where, if the model doesn’t overflow and become incoherent immediately [1], it is extremely useful.
But nooooo, this is luddite.ml, where saying anything good about AI gets you burnt at the stake
Some of y’all would’ve lit the fire under Jan Hus if you lived in the 15th century
[1] This is more of a concern for local models with smaller parameter counts running quantized. For premier models it’s not really a concern.
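For what it’s worth, the overflow concern is usually worked around by chunking: split the input, summarize each piece, then summarize the summaries. A minimal sketch below; `summarize` is a placeholder stub standing in for whatever model call you actually use, and the chunk size is an arbitrary example value, not a recommendation.

```python
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split text into pieces of at most max_chars, on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        chunks.append(current)
    return chunks

def summarize(text: str) -> str:
    # Placeholder: swap in a real model call here.
    return text[:200]

def summarize_long(text: str, max_chars: int = 2000) -> str:
    """Map-reduce summarization: summarize chunks, then the concatenation."""
    partials = [summarize(c) for c in chunk_text(text, max_chars)]
    combined = "\n\n".join(partials)
    # A final pass collapses the partial summaries if they're still too long.
    return summarize(combined) if len(combined) > max_chars else combined
```

The point is that each model call only ever sees a bounded slice, which is exactly what keeps a small quantized local model from running off the end of its context.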
Because it’s good at other things too, like creating tables and making full use of features that users typically aren’t informed about or don’t practice. Being able to describe a table and how you want to lay out the data for the best results is helpful.
i mean, if it sucks at this, why put it in lol
(rhetorical question, it’s to please investors, i know)