It’s probably gonna be a complex model that uses modules like LLMs to fulfill a compound task.
That sounds very hand-wavey. But even the presence of LLMs in the mix suggests it isn’t going to be very good at whatever it does, because LLMs are designed to fool humans into thinking something is realistic rather than actually doing something useful.
We know that it can output code, which means we have a quantifiable metric for making it better at coding.
How so? Project managers have been working for decades to quantify code, and haven’t managed to make any progress at it.
It’s not if we’re going to get a decent coding AI, it’s when.
LLMs are designed to fool humans into thinking something is realistic rather than actually doing something useful.
So closer to average human intelligence than it would appear. I don’t know why people keep insisting that confidently making things up and repeating things blindly is somehow distinct from average human intelligence.
But more seriously, this whole mindset is based on a stagnation in development that I’m just not seeing. I think it was Stanford that recently released a paper on a new architecture they developed that has serious promise.
How so? Project managers have been working for decades to quantify code, and haven’t managed to make any progress at it.
I think you misunderstand me. The metric is the code. We can look at the code, see what kind of mistakes it’s making, and then alter the model to try to be better. That is an iterative process.
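To make that concrete, here’s a rough sketch of the kind of loop I mean: generate code for a set of tasks, run each candidate against its unit tests, and track the pass rate across model versions. The generate_code() call and the task format here are placeholders I made up, not any particular model’s API.

```python
import os
import subprocess
import tempfile

def run_tests(candidate_source: str, test_source: str) -> bool:
    """Write the generated module and its tests to a temp dir, then run pytest."""
    with tempfile.TemporaryDirectory() as tmp:
        with open(os.path.join(tmp, "candidate.py"), "w") as f:
            f.write(candidate_source)
        with open(os.path.join(tmp, "test_candidate.py"), "w") as f:
            f.write(test_source)
        # Exit code 0 from pytest means every test passed.
        result = subprocess.run(
            ["python", "-m", "pytest", tmp, "-q"],
            capture_output=True,
        )
        return result.returncode == 0

def pass_rate(generate_code, tasks) -> float:
    """Fraction of tasks whose generated code passes its own tests.

    generate_code(prompt) stands in for whatever model is being evaluated;
    each task is assumed to be a dict with a "prompt" and the "tests" it
    must satisfy.
    """
    passed = sum(
        run_tests(generate_code(task["prompt"]), task["tests"])
        for task in tasks
    )
    return passed / len(tasks)
```

The point is that the tests, not a human judgment call, decide pass or fail, which is what makes the number comparable from one model version to the next.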
The year 30,000 AD doesn’t count.
Sure. Maybe it’s 30,000 AD. Maybe it’s next month. We don’t know when the breakthrough that kicks off massive improvement is going to hit, or even what it will be. Every new development could be the big one.
So closer to average human intelligence than it would appear
No, zero intelligence.
It’s like how people are fooled by optical illusions. It doesn’t mean optical illusions are smart, it just means that they tickle a part of the brain that sees patterns.
a paper on a new architecture they developed that has serious promise
Oooh, a new architecture and serious promise? Wow! You should invest!
The metric is the code. We can look at the code, see what kind of mistakes it’s making
No, we can’t. That’s the whole point. If that were possible, then companies could objectively determine who their best programmers were, and that’s a holy grail they’ve been chasing for decades. It’s just not possible.
and then alter the model to try to be better
Nobody knows how to alter the model to try to be better. That’s why multi-billion dollar companies are releasing new models that are worse than their previous models.
Maybe it’s next month
It’s definitely not next month, or next year, or next century. Nobody has any idea how to get to actual intelligence, and despite the hype, progress is as slow as ever.
Keep drinking that kool-aid.