The notion of object permanence – the understanding that objects continue to exist even when they cannot be seen – is considered an important developmental milestone in young children and is often used as an indicator of advanced cognitive abilities in animals. But could permanence play a much more fundamental role in how intelligence forms? Could it in fact hold the key to how we learn to understand the world around us?
In this series, I explore how striving for “permanence” in a world model can give rise to recognizing structure in the world we live in. I propose the notion of a permanence prior and argue that it could improve the data efficiency and generalization abilities of models trained in a self-supervised fashion. Finally, I propose a new neural network architecture – called PtolemyNet – that combines this permanence prior with a learned notion of deep, locality-aware concept spaces in an attempt to learn sample-efficient, generalizing world models.

Around 150 AD, the Greek mathematician and astronomer Claudius Ptolemy derived a detailed model of the night sky and our solar system. Ptolemy concluded that Earth itself was stationary, while all other observable astronomical objects – the sun, the moon, the planets, the stars – move around Earth in predictable, albeit complex, patterns. While astronomers have since found more elegant and precise models of our universe that no longer assume Earth to be stationary, Ptolemy’s approach illustrates just how strongly we are wired to view our surroundings as stable and unchanging.
In this series
- Priors and Invariants, a Primer
- Permanence – One Prior to Rule Them All?
- [coming soon] Learning Continuous Concept Spaces
- [coming soon] Instantiation and Feature Binding in Neural Networks
- [coming soon] Deeply Meaningful Representations with PtolemyNet
- [coming soon] Towards Predictive, Generalizing World Models
