This seems like a dumb benchmark.
ClockBench evaluates whether models can read analog clocks - a task that is trivial for humans but that current frontier models struggle with.
What do you mean trivial? Most humans I know can’t read the most basic white-background-big-black-numbers clocks.
Someone rigged the jury to get 90% on this:
Rather, ClockBench will likely end up improving AI in this regard over the next few years. Labs need benchmarks like this to identify a model's strengths and weaknesses so they can improve it in future versions.
The human-level accuracy is less than 90%!?
Some of those don’t have tick marks. I hate clocks like that; they’re difficult to read.
I’m surprised it’s near 90; a whole generation has grown up with digital clocks everywhere.
Have a look at the clock faces they’re using to benchmark and it’ll make more sense.
Really wish they published the whole dataset. They don’t specify on the page or in the paper what the full set was like, and the GitHub repo only has one of the easy-to-read ones. If >=10% of the set consists of clock faces designed not to be readable, then fair enough.
So LLMs operate like blind people - like every other web scraper and chatbot in existence.
we need a human bench for how many people can read the room