Does a Dog Catching a Frisbee Understand Physics? A Journey into Understanding
I was at the park over the weekend, watching a dog chase a Frisbee with incredible precision. The owner tossed it high, and the dog sprinted across the grass, leaping at just the right moment to snatch it mid-air.
I couldn’t help but wonder: does that dog understand the physics of the throw (the arc, the spin, the gravity), or is it all just instinct at work? That moment sparked a deeper reflection on what “understanding” really means, not just for dogs, but for AI and even for ourselves.
Let’s dig into this. When the human threw the Frisbee, the dog seemed to know exactly where it would land, darting to the spot with perfect timing. A 2003 Oxford Scholarship study offers some insight: dogs use a heuristic called linear optical trajectory, running so that the Frisbee’s image appears to travel along a straight line against the background until dog and disc meet. It’s a mental shortcut, not a theoretical grasp of physics. The dog isn’t calculating aerodynamics or gravity; it’s relying on a practical trick that works. So, does that count as understanding?
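If you want to see why a shortcut like that is enough, here’s a minimal sketch in Python. It swaps in a simplified cousin of the heuristic (watching whether the disc’s image rises at a steady rate, rather than the study’s full straight-line criterion), treats the Frisbee as a plain projectile with no lift, and uses made-up numbers for the throw and the dog’s head start; it’s an illustration of the idea, not the study’s model.

```python
# Toy check of an "optical" interception heuristic (a simplified cousin of the
# linear-optical-trajectory idea, not a reimplementation of the study's model).
# Assumptions: the disc flies on a plain ballistic arc with no lift (real
# Frisbees glide), the dog runs in a straight line along the throw, and what
# the dog "sees" is just the disc's elevation angle. All numbers are made up.

G = 9.81                  # gravity (m/s^2)
VX, VY = 9.0, 7.0         # disc launch velocity (m/s), horizontal and vertical
T_LAND = 2 * VY / G       # time until the disc returns to launch height
X_LAND = VX * T_LAND      # where it comes down
DOG_START = 20.0          # dog starts 20 m down-field from the thrower

def optical_rate(arrival_time, steps=200):
    """Rate of change of tan(elevation angle) seen by a dog running at the
    constant speed that would put it on the landing spot at `arrival_time`.
    Sampled over the first 90% of the flight, while the disc is still well
    above the dog's eye line."""
    speed = (X_LAND - DOG_START) / arrival_time
    dt = 0.9 * T_LAND / steps
    rates, prev_tan = [], None
    for k in range(1, steps + 1):
        t = k * dt
        disc_x = VX * t
        disc_y = VY * t - 0.5 * G * t * t
        dog_x = DOG_START + speed * t
        tan_elev = disc_y / (dog_x - disc_x)  # disc stays between thrower and dog here
        if prev_tan is not None:
            rates.append((tan_elev - prev_tan) / dt)
        prev_tan = tan_elev
    return rates

for label, t_arrive in [("arrives exactly on time", T_LAND),
                        ("arrives 20% early", 0.8 * T_LAND),
                        ("arrives 20% late", 1.2 * T_LAND)]:
    r = optical_rate(t_arrive)
    print(f"{label:25s} rate at start {r[0]:.2f}, rate near the end {r[-1]:.2f}")

# Only the on-time run keeps the rate steady; arriving early or late makes the
# disc's image visibly speed up or slow down -- and that mismatch, not any
# knowledge of gravity, is all the feedback the dog needs to adjust its pace.
```

The takeaway: a single “is the image speeding up or slowing down?” signal is enough to intercept a falling object, with no equations in sight.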
This question sent me down a rabbit hole, leading to a broader debate about predictive power versus explanatory understanding. In the world of AI, this tension feels especially relevant. I recently came across a perspective from Ilya Sutskever, shared in a 2023 TED Talk, where he argued that high predictive accuracy amounts to understanding: roughly, that to predict the next word well enough, a system has to internalize something about the world that produced it. Think about how large language models predict the next word in a sentence; they’re scarily good at it. But is that true understanding, or just pattern recognition? On the other hand, some statistical models do a fine job of explaining phenomena while predicting poorly, which challenges the idea that prediction alone tells the whole story. Just like the dog can predict the Frisbee’s path but can’t explain why it falls, an AI might nail a forecast without grasping the underlying “why.”
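To see what “pattern recognition” means at its crudest, here’s a toy next-word predictor in Python. It bears no resemblance to a real LLM’s architecture, but it is trained on the same signal of guessing what comes next; the corpus and prompts are invented for the demo.

```python
# A deliberately dumb "next-word predictor": it counts which word follows
# which in a tiny corpus and always guesses the most frequent follower.
# No grammar, no meaning, no model of the world -- just counting.

from collections import Counter, defaultdict

corpus = (
    "the dog chased the frisbee and the dog caught the frisbee "
    "the dog dropped the frisbee and the dog chased the frisbee again"
).split()

# Count, for every word, which words were seen immediately after it.
follower_counts = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    follower_counts[current_word][next_word] += 1

def predict_next(word):
    """Return the most common word seen after `word`, or None if unseen."""
    followers = follower_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

for prompt in ["the", "dog", "chased", "frisbee"]:
    print(f"after '{prompt}' the model guesses: {predict_next(prompt)}")
```

It predicts plausibly on text that resembles its training data while grasping nothing about dogs or Frisbees, which is the tension in miniature.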
What really caught my attention was the concept of “world models” in AI, a hot topic in 2025 research. These models aim to go beyond mere prediction, learning through observation and reasoning to mimic human-like thinking. Imagine an AI that doesn’t just guess the next word but understands the context of a conversation the way we do. Yet, even these advanced systems struggle to balance prediction with true explanatory depth. It’s a reminder that understanding might not be a single, perfect ideal; maybe it’s a spectrum, stretching from instinctive prediction to deep, explanatory insight.
As I watched that dog trot proudly back with the Frisbee, tail wagging, I kept circling the same thought: whether it’s a pup catching a toy or an AI generating text, the real question may not be whether they understand physics or language the way we do. Maybe it’s how we define understanding itself. Is predictive power enough to call something intelligent, or are we missing a deeper layer?