I do not believe that LLMs will ever be able to replace humans in tasks designed for humans. The reason is that human tasks require tacit knowledge (i.e. job experience), and that kind of knowledge is not written down in training material.
However, we will start to have tasks designed for LLMs pretty soon. It has already been observed that LLMs work better on material produced by other LLMs.
To be fair, not all of an LLM's knowledge comes from training material. The other way to supply knowledge is to provide context along with the instructions.
I can imagine someone someday developing a decent way for LLMs to write down their mistakes in a database, plus some clever way to recall the most relevant memories when needed.
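Roughly something like the sketch below: store each mistake with a vector, then pull back the closest ones before the next task. This is only a toy illustration; the hashed bag-of-words "embedding" is a stand-in for a real embedding model, and the MistakeMemory class and its methods are hypothetical names, not any existing library's API.

    import hashlib
    import numpy as np

    def embed(text: str, dim: int = 256) -> np.ndarray:
        """Toy stand-in for a real embedding model: hashed bag-of-words, L2-normalized."""
        v = np.zeros(dim)
        for tok in text.lower().split():
            idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
            v[idx] += 1.0
        n = np.linalg.norm(v)
        return v / n if n else v

    class MistakeMemory:
        """Append-only store of past mistakes, recalled by similarity to the current task."""
        def __init__(self):
            self.entries = []  # list of (text, vector) pairs

        def record(self, mistake: str) -> None:
            self.entries.append((mistake, embed(mistake)))

        def recall(self, task: str, k: int = 3) -> list[str]:
            q = embed(task)
            ranked = sorted(self.entries, key=lambda e: -float(e[1] @ q))
            return [text for text, _ in ranked[:k]]

    mem = MistakeMemory()
    mem.record("Forgot to escape user input when building the SQL query.")
    mem.record("Assumed the CSV used commas; it was tab-separated.")
    # The recalled mistakes would be prepended to the next prompt as context.
    print(mem.recall("parse the uploaded CSV file"))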
You sort of described RAG. It can improve alignment, but the model's original training is hard to overcome.
See Grok, which bounces from “woke” results to “full Nazi” without hitting the midpoint desired by Musk.
There are already existing approaches tackling this problem, e.g. https://github.com/MemTensor/MemOS