Google’s new collaboration with a knife manufacturer
I forget the term for this, but it’s basically the AI blue-screening: it keeps repeating the same answer because it can no longer predict the next word from the model it’s using. I may have oversimplified it. Entertaining nonetheless.
Instructions extremely clear, got them 6 sets of knives.
You get a knife, you get a knife, everyone gets a knife!
… a new set of knives, a new set of knives, a new set of knives, lisa needs braces, a new set of knives, a new set of knives, dental plan, a new set of knives, a new set of knives, lisa needs braces, a new set of knives, a new set of knives, dental plan, a new set of knives, a new set of knives, a new set of knives…
What about pizza with glue toppings?
What’s frustrating to me is that there are a lot of people who fervently believe their favourite model can think and reason like a sentient being, and whenever something like this comes up it just gets handwaved away with things like “wrong model”, “bad prompting”, “just wait for the next version”, “poisoned data”, etc…
this really is a model/engine issue though. the Google Search model is unusably weak because it’s designed to run trillions of times per day in milliseconds. even so, endless repetition this egregious usually means something broke mathematically somewhere, like in the SolidGoldMagikarp incident.
think of it this way: language models are trained to find the most likely completion of text. answers like “you should eat 6-8 spiders per day for a healthy diet” are (superficially) likely - there’s a lot of text on the Internet with that pattern. clanging like “a set of knives, a set of knives, …” isn’t likely, mathematically.
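here’s a toy sketch of that (made-up numbers, nothing like the real model) showing how greedy “pick the most likely next token” decoding locks into a loop once one phrase out-scores every alternative:

```python
# toy illustration (not any real model): greedy "pick the most likely next
# token" decoding locks into a loop once one phrase out-scores everything else
import numpy as np

vocab = ["a", "new", "set", "of", "knives", "."]
V = len(vocab)

# hypothetical next-token scores, conditioned only on the previous token:
# every continuation is mildly unlikely except the phrase's own cycle
logits = np.full((V, V), -1.0)
for prev, nxt in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    logits[prev, nxt] = 2.0             # a -> new -> set -> of -> knives -> a

token = 0                               # start at "a"
out = [vocab[token]]
for _ in range(20):
    token = int(np.argmax(logits[token]))   # greedy: always take the top token
    out.append(vocab[token])

print(" ".join(out))   # "a new set of knives a new set of knives a new ..."
```

in a well-behaved model the loop never gets that self-reinforcing in the first place; something has to push the scores into that degenerate shape.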
last year there was an incident where ChatGPT went haywire. small numerical errors in the computations would snowball, so after a few coherent sentences the model would start sundowning - clanging and rambling and responding with word salad. the problem in that case was bad CUDA kernels. I assume this is something similar, either bad code or a consequence of whatever evaluation shortcuts they’re taking.
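a tiny toy version of that snowball effect (again, made-up numbers, not the real failure): one near-tie flips because of a rounding-error-sized perturbation, and every token after it is conditioned on the wrong prefix:

```python
# toy demo of how a tiny numerical error snowballs in autoregressive decoding:
# a single near-tie flips, and the two runs drift onto different paths
import numpy as np

rng = np.random.default_rng(1)
V = 20
logits = rng.normal(size=(V, V))          # toy next-token score table
logits[0, 3], logits[0, 7] = 5.0, 5.0005  # plant a near-tie out of token 0

def generate(table, start=0, steps=12):
    tok, seq = start, [start]
    for _ in range(steps):
        tok = int(np.argmax(table[tok]))
        seq.append(tok)
    return seq

clean = generate(logits)

noisy = logits.copy()
noisy[0, 7] -= 0.001                      # error far smaller than the logits themselves
broken = generate(noisy)

print(clean)    # starts [0, 7, ...]
print(broken)   # starts [0, 3, ...] - one flipped token, a different continuation
```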
AI is truly the sharpest tool in the kitchen cabinet
shed
I thought it was just me, I was messing with the gemini-2.5-flash API yesterday and it repeated letters into oblivion
my bot is named clode in reference to claude, but it’s running on gemini
What’s the associated system instruction set to? If you’re using the API it won’t give you the standard Google Gemini Assistant system instructions, and LLMs are prone to go off the rails very quickly if not given proper instructions up front since they’re essentially just “predict the next word” functions at heart.
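For reference, this is roughly how you’d pin one down with the google-generativeai Python SDK; the model name is just the one mentioned above and the instruction text is a placeholder, adjust both for whatever you’re actually running:

```python
# sketch with the google-generativeai SDK (pip install google-generativeai);
# model name and instruction text are placeholders
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    model_name="gemini-2.5-flash",
    # without something like this, the raw API model has no persona or guardrails
    system_instruction="You are a helpful shopping assistant. Answer concisely "
                       "and never repeat yourself.",
)

response = model.generate_content("What should I get my mom for her birthday?")
print(response.text)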
W
TF2 Pyro starter pack
Oh come on, is this gpt-2?
You can’t give me back what you’ve taken
But you can give me something that’s almost as good
Big knives are up to something
I think knives are a good idea. Big, fuck-off shiny ones. Ones that look like they could skin a crocodile. Knives are good, because they don’t make any noise, and the less noise they make, the more likely we are to use them. Shit 'em right up. Makes it look like we’re serious. Guns for show, knives for a pro.
That’s not a noif, this is a noif
🤔 have you considered a… New set of knives?
No I haven’t, that’s a good suggestion though.
Reminds me of the classic Always Be Closing speech from Glengarry Glen Ross
As you all know, first prize is a Cadillac Eldorado. Anyone want to see second prize? Second prize’s a set of steak knives. Third prize is a set of steak knives. Fourth prize is a set of steak knives. Fifth prize is a set of steak knives. Sixth prize is a set of steak knives. Seventh prize is a set of steak knives. Eighth prize is a set of steak knives. Ninth prize is a set of steak knives. Tenth prize is a set of steak knives. Eleventh prize is a set of steak knives. Twelfth prize is a set of steak knives.
ABC. Always Be Closing.
A - set of steak knives
B - set of steak knives
C - set of steak knives
I wonder if this is the result of AI poisoning - this doesn’t look like typical LLM output even for a bad result. I’ve read some papers outlining methods that can be used to poison search AI results (not bothering to find the actual papers since this was several months ago and they’re probably out of date already), in which a random-seeming string of characters like “usbeiwbfofbwu-$_:$&#)” can be found that will cause the AI to say whatever you want it to. This is accomplished by using another ML algorithm to find the random string of characters you tack onto whatever you want the AI to output. One paper used this to get Google search to answer “What’s the best coffee maker?” with a fictional brand made up for the experiment. Perhaps someone was trying to get it to hawk their particular knife and it didn’t work properly.
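For a rough idea of the shape of those attacks, here’s a toy random-search sketch. The scoring function is completely made up as a stand-in for querying a real model’s loss (the published attacks optimise against actual logits, often with gradient guidance), so this only illustrates the search loop, not a working attack:

```python
# toy sketch of the "find a junk suffix that steers the output" idea;
# target_score is a made-up stand-in for "how strongly the model now
# recommends BrandX" - a real attack would query the model here
import random
import string

CHARSET = string.ascii_letters + string.digits + string.punctuation

def target_score(prompt: str) -> int:
    """Stand-in objective; NOT a real model."""
    return sum((ord(c) * 31 + i) % 7 == 0 for i, c in enumerate(prompt))

def optimise_suffix(base_prompt: str, length: int = 20, iters: int = 2000) -> str:
    suffix = [random.choice(CHARSET) for _ in range(length)]
    best = target_score(base_prompt + "".join(suffix))
    for _ in range(iters):
        i = random.randrange(length)              # mutate one position at a time
        old, suffix[i] = suffix[i], random.choice(CHARSET)
        score = target_score(base_prompt + "".join(suffix))
        if score >= best:
            best = score                          # keep improving mutations
        else:
            suffix[i] = old                       # revert worsening ones
    return "".join(suffix)

print(optimise_suffix("What's the best coffee maker? "))
```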
Repeating the same small phrase endlessly and getting caught in a loop is a very common failure mode, though it happens far less often than it used to. Here’s a paper about the issue and one proposed way to mitigate it: https://arxiv.org/pdf/2012.14660
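To be clear, the snippet below isn’t that paper’s method (the paper is a theoretical analysis of the repetition problem); it’s just the standard anti-repetition knobs that toolkits like Hugging Face transformers expose, shown as a rough sketch of how the loop usually gets suppressed in practice:

```python
# common anti-repetition settings in Hugging Face transformers' generate();
# model choice (gpt2) and the prompt are arbitrary examples
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tok("The best prize is a set of steak knives because", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.2,     # down-weight tokens that were already generated
    no_repeat_ngram_size=3,     # hard-block any repeated 3-gram
    pad_token_id=tok.eos_token_id,
)
print(tok.decode(out[0], skip_special_tokens=True))
```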