>>> "blueberry".count('B') 0
Likewise, if you instruct the AI to break the word down into letters (one per line) first, it gets it right more often. I think that's the point the post is trying to make.
The letter-counting issue is actually a fundamental problem of whole-word or subword tokenization that has had an obvious solution since ~2016, and I don't get why commercial AI won't implement it. Probably because it's a lot of training-code complexity (but not much compute) for solving a very small problem.
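If you're curious what the model actually "sees", here's a rough sketch, assuming the third-party tiktoken package is installed (the exact split depends on which encoding you pick):

import tiktoken

# A subword tokenizer hands the model a few opaque chunks, not nine letters,
# which is why "how many b's?" has no direct representation in the input.
enc = tiktoken.get_encoding("cl100k_base")
tokens = enc.encode("blueberry")
print(tokens)                                               # integer token IDs
print([enc.decode_single_token_bytes(t) for t in tokens])   # the subword pieces

Spelling the word out letter by letter (as suggested above) forces roughly character-level tokens, which is a big part of why that prompt trick helps.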
Are they trying to say that AI haters are equivalent to people who can’t code?
I suspect it’s more like “use the tool correctly or it will give bad results”.
Like, LLMs are a marvel of engineering! But they're also completely unreliable for use cases where you need consistent, logical results. So maybe we shouldn't use them in places where we need consistent, logical results. That makes them unsafe for use in most business settings.
There are like twelve genuine use cases, and because of the cult of the LLM bro, nine of those are negated by weird blind faith. Two more are crimes against humanity.
Yeah, they should be used to generate text
Not even that; they should be used to interpret/process natural language and maybe generate some filler (smart defaults, etc.; generating titles for things is a good use). They're also very good at translation.
The more text an LLM has to generate, the worse it does, and the less it can base itself on real text, the less likely it is to get things right.
LLMs are basically optimized for making newspapers to put in the background of games: put some relevant stuff in the prompt and it'll shit out text that's sensible enough that players can skim things in the world and actually feel immersed.
The streamer DougDoug has come up with some really good uses for AI; in entertainment he's really creative with it. For example, he had a dream where he was playing a rage game (a really difficult Mario Maker level) and an AI would process everything he said and punish him if it decided he was being negative.
Or he would send every chat message from the last 30 seconds to an LLM to play Family Feud: he had to guess the top three things his chat said for a question like “what do you do in GTA that you wish you could do IRL”, and the LLM would summarize all the slightly different answers into three categories.
Or simply as entertainment for his chat to play with while he plays a game: a “joke bot” would rate the jokes people told it and decide whether they were funny or not, and people would get banned for a million seconds if the AI decided their joke was unfunny.
He also just codes and makes stuff that's not AI. He mapped Obama's hand movements from a UN meeting to control Mario Party against his chat, with him only able to control his own character through a voice-to-text program.
I’ll admit I once used an LLM to generate a comparison between the specs of three printers. It did a great job, but doing it myself is still faster and doesn’t make me feel dirty.
I understood this as a vibe coder trying Python, the vibe-coding way.
I'm not sure of their intent, but I follow this guy on Bluesky. He's super pro open source and worked on a bunch of Google projects back in the day, so I think if anything he might just be making fun of vibe coders.
making fun of vibe coders
That was implied, this is 196 after all. :-)
There are zero 'B’s in ‘blueberry’
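str.count is case-sensitive, so 0 really is the right answer to what was asked. If you want the count people actually mean, normalize the case first (quick sketch):
>>> "blueberry".count('B')
0
>>> "blueberry".lower().count('b')
2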
Have you never coded on a Windows file system?
i kno peeps will get mad but imma comment anyway!!! u cnt stop me!
i think dis is kindsa real-… *vine boom* 💥🤨🤨🤨
dis funi got so many - like - sub-layers 🧅
observe — — — the layers — — — (if u care)
- obvious reference to peeps dismissin llms for not bein able to answer spelling questions
- funi fake-thing, cuz programmin is actulli preddi useful - one has to kno whad its gud for tho
- secret funi: python is suuuuupr good at countin lettrs
"strawberry".count("r")
while current llms r not (largely due to their tokenization step, making them literally unable to count the letters, but they can still count occurances of words) - the funi could also be seen in a way, where the poster got into coding via llms, then realized thad this “coding” is actually not as easy as he thought…
anyway - i believe thad using a rule-based query-interpreation system (like siri or googles query-specific UI) with llms as a fallback gives much improved human-input-handlin-systems.
besides thad - i dun see much use quite yet
(i hope shareholders-chan is fine with thad >~< )
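For what it's worth, here is a minimal sketch of that rule-first, LLM-fallback idea from the comment above (ask_llm is a hypothetical stand-in, not any real API):

import re

def handle_query(text: str) -> str:
    # Rule-based fast path: deterministic answers for queries the rules can parse.
    m = re.fullmatch(r"how many (\w)'?s? (?:are )?in (\w+)\??", text.strip(), re.IGNORECASE)
    if m:
        letter, word = m.group(1).lower(), m.group(2).lower()
        return str(word.count(letter))
    # Anything the rules don't cover falls through to the LLM.
    return ask_llm(text)

def ask_llm(text: str) -> str:
    # Hypothetical placeholder for whatever model/endpoint you'd actually call.
    return "(LLM answer for: " + text + ")"

print(handle_query("how many b's in blueberry?"))  # -> 2, no LLM involved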
Are you having a stronk?
no dear lemm world user, no i am not having a stroke.
Fully agreed, this is a very clever post and people are just getting antsy about it because it threatens their black-and-white AI opinions
Sure, but those opinions are created by the fact that basically everyone is trying to cram AI into everything, regardless of suitability. Naturally there's going to be massive backlash.
Also, note that Python politely tells you it didn't understand, because it isn't a lying machine, it's a programming language; meanwhile an LLM just makes up a lie.
Trying to cram AI into everything regardless of suitability is indeed bad. However, that does not justify black-and-white opinions. It explains them, but it doesn’t justify them.
A bubble doesn't mean it's useless, though. It means shareholders think it's much more valuable and profitable than it really is, and the value is mostly propped up by hype.
If there was a trillion dollar investment in python, that would be a bubble.
Removed by mod
Smorty just uses this communication style and they’re very nice and sweet and good.
water u talkin bout?
EIDT: i do not have schizophrenia, if thats what ur implyin…