LLMs have non-deterministic outputs, meaning you can’t exactly predict what they’ll say.
I mean…they can have non-deterministic outputs. There’s no requirement for that to be the case.
It might be desirable in some situations; randomness is one way to add variety to a conversation. But it might be very undesirable in others: no matter how many times I ask "What is 1+1?", I usually want the same answer.
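To make that concrete, here's a minimal sketch using the Hugging Face `transformers` library (the model name "gpt2" is just an illustrative choice, not a recommendation). Greedy decoding picks the single most likely token at every step, so repeated runs give identical output; turning sampling on is what introduces the randomness.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is used here purely as a small example model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("What is 1+1?", return_tensors="pt")

# Deterministic: greedy decoding always picks the highest-probability
# token, so this prints the same text every run.
greedy = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(greedy[0], skip_special_tokens=True))

# Non-deterministic: sampling with a temperature draws from the token
# distribution, so repeated runs can differ.
sampled = model.generate(**inputs, max_new_tokens=20, do_sample=True, temperature=0.9)
print(tokenizer.decode(sampled[0], skip_special_tokens=True))
```

The point isn't the particular library: whether outputs vary from run to run is a decoding choice you make, not an inherent property of the model.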