
Why AI Still Can’t Tell Time — And Why That Matters More Than You Think


A surprising new study reveals that even the smartest AI models struggle with simple tasks like reading clocks and understanding calendar dates.

Artificial intelligence has come a long way — it can write poems, draw realistic portraits, ace academic tests, and even crack jokes (sometimes). But ask it to read the time on an old-fashioned clock or figure out what day your birthday lands on next year? Suddenly, it’s stumped.

A recent study, presented at the 2025 International Conference on Learning Representations (ICLR), has revealed just how baffled today’s most advanced AI models are when it comes to two surprisingly simple tasks: telling time and reading calendars.

Despite their futuristic capabilities, these models — including Meta’s LLaMA 3.2-Vision, Google’s Gemini 2.0, Anthropic’s Claude 3.5 Sonnet, and OpenAI’s GPT-4o — struggle with the kind of everyday reasoning that most humans learn in elementary school.

AI’s Not-So-Timely Problem

The researchers behind this study, led by Rohit Saxena from the University of Edinburgh, wanted to see how well these multimodal large language models (MLLMs) could handle visual and logical tasks that involve time. So, they created a custom set of images showing analog clocks and calendar queries — the kind of stuff that’s second nature to most of us.

But the results were, in a word, underwhelming.

On average, the models could correctly read the time from a clock image only 38.7% of the time. When asked to calculate the day of the week for a specific calendar date — for example, “What day will the 153rd day of the year be?” — they performed even worse, with a success rate of just 26.3%.
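For contrast, the date arithmetic the models got wrong is trivial for rule-based code. As a rough illustration — not code from the study — the example question can be answered deterministically with Python's standard library:

```python
from datetime import date, timedelta

def day_of_year_weekday(year: int, n: int) -> str:
    """Return the weekday name of the nth day of the given year."""
    # The nth day of the year is January 1 plus (n - 1) days;
    # timedelta handles month lengths and leap years for us.
    d = date(year, 1, 1) + timedelta(days=n - 1)
    return d.strftime("%A")

# The 153rd day of 2025 is June 2, a Monday.
print(day_of_year_weekday(2025, 153))
```

A traditional program answers this correctly every time by applying calendar rules; a language model, lacking such rules, must instead approximate the answer from patterns in its training data.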

That’s not just a fluke — it’s a major gap in AI reasoning.

Why AI Trips Over Clocks and Calendars

The root of the problem lies in how these models are trained. Most AI systems today learn by analyzing massive amounts of data and identifying patterns. They’re incredibly good at mimicking what they’ve seen before. But certain tasks — like interpreting the position of clock hands or calculating calendar dates — involve spatial reasoning and rule-based logic, not just pattern recognition.

“Recognizing that something is a clock is fairly easy for AI,” Saxena explained. “But understanding what time it shows requires detecting angles, dealing with overlapping hands, and interpreting a variety of dial styles — from Roman numerals to minimalist designs. That’s a different skill entirely.”
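The angle-to-time mapping Saxena describes is itself fixed, simple arithmetic; what the models lack is the perception and spatial reasoning that feed it. A minimal sketch (illustrative only, and assuming the hand angles have already been extracted from the image) might look like:

```python
def angles_to_time(hour_angle: float, minute_angle: float) -> str:
    """Convert clock-hand angles (degrees clockwise from 12) to a time string."""
    # Minute hand sweeps 360 degrees per hour -> 6 degrees per minute.
    minute = round(minute_angle / 6) % 60
    # Hour hand sweeps 30 degrees per hour, plus 0.5 degrees
    # per elapsed minute, so subtract the minute drift first.
    hour = int((hour_angle - minute * 0.5) / 30) % 12
    return f"{hour if hour else 12}:{minute:02d}"

print(angles_to_time(90, 0))     # hands at 3 and 12 -> 3:00
print(angles_to_time(225, 180))  # hour hand halfway past 7 -> 7:30
```

The hard part for an MLLM is not this arithmetic but reliably estimating the two input angles from overlapping hands and varied dial styles.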

And when it comes to calendars, the issue gets even trickier. While AI may have seen countless examples of what a leap year is, it still might not understand how to apply that information to calculate the 153rd day of the year or deal with exceptions in the Gregorian calendar.
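The Gregorian leap-year rule the models struggle to apply is just a three-clause check. As an illustrative sketch of the kind of explicit rule a language model does not actually execute:

```python
def is_leap(year: int) -> bool:
    """Gregorian leap-year rule: divisible by 4, except centuries,
    unless the century is divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print(is_leap(2024))  # True  (divisible by 4)
print(is_leap(1900))  # False (century not divisible by 400)
print(is_leap(2000))  # True  (century divisible by 400)
```

Knowing this rule as text — which the models almost certainly do — is different from applying it mid-calculation, which is where they fall down.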

Not a Math Machine After All

What’s particularly surprising to many is that AI fails at something that seems like basic arithmetic. But here’s the catch: modern language models don’t “do math” in the way traditional computers do. They don’t run actual calculations — they predict answers based on the patterns they’ve seen during training. If they haven’t seen enough similar examples, they’re likely to guess — and guess wrong.

This is one more example of a growing realization: AI doesn’t “understand” the world the way humans do. It mirrors it.

What This Means for the Future of AI

These findings serve as a wake-up call for how we use — and trust — AI in real-world applications. From smart scheduling assistants to autonomous robots, any system that needs to blend visual perception with precise logic can’t afford to get these basics wrong.

It also points to some needed changes: better training data, more robust testing, and in many cases, keeping a human in the loop — especially when timing, dates, and logistics are on the line.

“AI is powerful,” Saxena concluded, “but when tasks mix perception with precise reasoning, we still need rigorous testing, fallback logic, and — often — a human safety net.”

So, next time you glance at a clock or plan your weekend, take a moment to appreciate that some skills still make us pretty special. Even in a world of super-smart machines, there are some things they just can’t quite grasp. Yet.