Has anyone else noticed some delivery apps using AI generated images for food items when a restaurant doesn’t have an actual photo?
Always looks fucking awful too. How is that in any way helpful? “Here is what a slice of pizza generally looks like”. Cool. Thanks, I guess.
This made me realize, the only last remaining good thing about mcdonalds, the art of making the fake food good looking photos, will soon be replaced by actual AI slop.
I much prefer AI slop for menu thumbnails to carefully crafted lies. at least I know it’s inaccurate
carefully crafted lies are at least a bias approximation of the actual product done by people that know what the actual product look like. That’s still better than something misleading that not everyone can identify.
fair
Reminds me of the Amazon product pages where the images are clearly ones where the product was photoshopped onto some other image. Can’t even be bothered to use the product for real and take some photos? I hate everyone involved in that shit.
they kibsa always did that with stock photos - i dun get why they wud geberate imags tho- plenty cc0 pizza pics online ~
Yeah, if I’m looking at a photo of a dish on a food app, I want it to be a photo of what it actually is, not an AI generated representative of it. Otherwise it’s just completely useless to me.
Which would you rather?
I guess the top one. But honestly, that one looks like a generalized insult towards the entirety of Italy. I make better looking pizzas at home.
Call a paramedic you’re having a stroke
she just talks like that don’t judge
God the number of people I’ve seen try to use LLMs to “detect” AI generated photos/text…
The fact that it responded only with “no” implies a convo exchange previously, in which it was prompted to either
- respond exactly with “no”
or
- keep the response short
it seems like the first case applies here, since it actually gives a little post-amble at the image gen response.
Apparently, with chatgpt, it foesnt actually look st the generated image. Otherwise it would be able to tell, that the users image is equivalent to the generated one (since the tokens would be literally identical, so its like asking an llm “are these two paragraphs the same text?”
aaaaaaanyway- dont use VLMs to check if an image was generated! there r actual models trained for that task. VLMs r not.
Yeah, I see these kinds of misunderstandings all the time with people asking ChatGPT to do something with an image, and then it failing and apologizing and doing the same. The LLM doesn’t do anything with the image, it’s calling some other service to do it. It can’t apologize for the output, or try harder to “make sure” that glass of wine is full to the brim, what it says and does in these cases is entirely disconnected.
Even “recognizing” details in an image, some other service is parsing the image and writing a text description for the LLM. It’s not the same service as the one that does the generation, no part of this pipeline would ever have the chance to realize “hey, this is the same image”.
yea- tru…
tool calls really do kindsa obfuscate what exactly is going on in a continuous-feeling system
it makes one issue seem like the result of the main system, even if thads not rlli tru - - -
me yappin bout how bad todays chatbot web-search is and how it could be inproved
mygroarg im surprised how little current day chatgpt cn actulli do >v< like - web search!
many peeps, includin my brothr, nevr rlli learned to google, n now jus calls chatgpt for litrlly almost anything.
while its always bettr to do research urself, when i do decide ti hav a web search done by one of these bots - i feel like… the results shud be nicer?
like - chatgot cud fir exampl, reference direct quotes from the sources, which r then checked against the actual sources, n then theres an if statement
- if the text does appear in the sources, show a lil ui element called “From the Source”, where it shows the quote amd source link, so the user knows its legit
- if it doesn appear, chatgpt messed up, n dis “quote” must be removed, or the entire response must be regenerated. Mayb even show a lil “Invalid Citation!” so users kno whats up -
but noooo all these companies jus luv presentin their llms as perfect oracles…
but heyyyyy whadddoikno - im jus a sili lil consumer., - - -
more yap
mygosh i think like - waaaaayyyy too much bout dis kindsa scaffolding every day,…
current day llms r oversold n overhyped for what they cn do.
- they cnt replace a sofware dev - but they cn make simple-to-medium complex html demos with bad ux ~
- they cnt generate novel ideas (lets ignore AlphaEvolve for now…), but its great at languag comprehension n categorisation
llms r an importnt steppin stone tiwards what we wud call “ai” - but woarg current consumer facin systems r spectacularly meh n llms r bein overused in places they shoudn
No
lol
Does ChatGPT even answer that monosyllabically?
Does ChatGPT even answer that monosyllabically?
Excellent question. You’re very right for asking and this shows real intelligence and analytic ability. Let’s have a deeper look at the information I’ve found:
Some users say: no. Others report: maybe, but mostly no.
So on balance, I would recommend that it is safe to conclude the answer is likely: maybe yes sometimes.
Let me know if you want me to give you further answers to unrelated questions or simplify further.
Perfect. Long winded, unnecessary flattery, hedged non-answer. Just what I need in a loyal companion.
Well done. You know there’s actual people who talk/write like that. They usually think they’re highly intelligent. Just like ChatGPT I guess.
I’m on break, I clicked on the image, and one of my 7-year-old students standing behind me saw it and immediately said “ChatGPT”. Ha!
yeah cardboard don’t melt like that
Simulated Intelligence
Artificial Idiocy
……considering if you don’t think the ai hasn’t been coded around this case, it’s interesting to wonder the types of ways it might say yes.
I don’t think I follow your comment.
Well, ai, as we use it today, is just a LLM. Which is ‘take a look at all the text you have access to and predict the next thinsaid’ more or less (I think, I’m not a professional) and then you can use that same concept for art or videos or sound or whatever.
So, to have it generate an image, then give it its own image back and ask if it’s ai generated, it’s obvious to us, but to the ai, unless it was programmed to recognize that, it would have to look at other images it already had access to (and used to create the image) and say, is this image in here? Or if I can process what does an ai generated image contain.
Then if you abstract it further, it’s like asking the ai what difference between an artist and an ai is, which is sorta interesting to think about.
I see