https://infosec.exchange/@malwaretech/114903901544041519
the article, since there is so much confusion about what we are actually talking about: https://edition.cnn.com/2025/07/23/politics/fda-ai-elsa-drug-regulation-makary
Can you imagine how sad those LLMs will be if they make a mistake that winds up harming people?
About as sad as the CEO
Not at all, because they are not thinking or feeling machines, merely algorithms that predict the likelihood of words following other words and spit them out
More so than the equivalent human? I have to think about this: https://www.youtube.com/watch?v=sUdiafneqL8