All theories and causal beliefs eventually “run dry” in their ability to explain feature patterns; when they do, these more associative aspects of concept representations take over. Of course, these associative relations must always be constrained so that not all logically possible associations are made. For Quine, the young child’s “animal sense of similarity” and a general set of associative relations provide adequate constraints for the development of all later theory. The following chapters will empirically assess whether there is such a shift from a purely atheoretical “original sim,” to a causally interpreted, theoretically coherent conceptual structure. - Frank Keil (1992)
At some level, it shouldn’t be too surprising that a theory does not provide a full explanation of the natural world. Theories are often abstract, and it takes significant work to specify them, implement them, and derive predictions from them. Why keep theories abstract? Abstraction allows us to generalize our theories to new situations, explaining data beyond the contexts we have already seen. For example, our theory of the solar system and planetary orbits directly informed our early theories of the atom. The common problem with abstract theories, however, is that non-trivial information may be lost in the abstraction, which can lead to vague or inaccurate predictions.
In this quote, Frank Keil is referring to a specific kind of theory: an intuitive theory, or folk theory. These theories are our personal understandings of the relationships and structures in the world. For example, over the lifespan, you expect your personal understanding of the weather to shift from appeals to the seasons to interactions between air fronts, geographic features, and the rotation and orbit of the planet. Frank’s book is a deep dive into how our concepts of natural kinds versus human-made artifacts change across development. In this case, children are frequently presented with new data that undermine their current theory of, e.g., what it means to be an animal. Yet we have little problem generalizing properties (e.g., breathes) to new animals (e.g., the emu) without knowing how they fit into our folk theory.
In the absence of a structured theory, it has been proposed that we use our judgments of similarity to generalize. A unicorn is similar to a pegasus; so if unicorns breathe, pegasi breathe. Admittedly, there are problems with such an approach. For example, an elderberry is similar to a hemlock berry, but hemlock berries are poisonous while elderberries are edible. That being said, our judgments of similarity have been shown to sometimes reflect the structure of the world, suggesting that the ability to compute similarity may be sufficient to capture rich, structured theoretical knowledge.
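To make the projection rule explicit, here is a minimal sketch in Python; the similarity scores, the threshold, and the project helper are invented for illustration, and the hemlock case shows exactly where the rule misleads us:

```python
# Hypothetical pairwise similarity scores (invented for illustration).
similarity = {
    ("unicorn", "pegasus"): 0.9,
    ("elderberry", "hemlock berry"): 0.8,
}

# Known properties of familiar concepts.
properties = {"unicorn": "breathes", "elderberry": "edible"}

def project(source, target, threshold=0.5):
    """Project a known property onto a sufficiently similar concept."""
    if similarity.get((source, target), 0.0) >= threshold:
        return properties[source]
    return None

print(project("unicorn", "pegasus"))           # breathes -- a safe inference
print(project("elderberry", "hemlock berry"))  # edible -- a dangerous one
```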
Now computing similarity, while often trivial for a human, is an extremely difficult task to model. There have been many, many, many experimental, theoretical, and formal investigations into characterizing and explaining just how we compute similarity. This is far beyond what we have time for at the moment. Suffice it to say that an early, influential idea was that similarity is computed via co-occurrence statistics between exemplars and their featural properties. Such associative learning mechanisms are well evidenced behaviorally, and plausible neural mechanisms have been identified. Importantly, associative learning mechanisms make relatively few, if any, assumptions about the nature of mental representations, making for a fairly parsimonious theory.
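To make the co-occurrence idea concrete, here is a minimal sketch; the exemplar-by-feature matrix, the cosine measure, and the similarity-weighted generalization rule are my assumptions for illustration, not a specific published model:

```python
import numpy as np

# Toy exemplar-by-feature co-occurrence matrix (hypothetical values).
# Columns: feathers, beak, flies, fur, runs, two_legs
features = {
    "robin": np.array([1, 1, 1, 0, 0, 1], dtype=float),
    "dog":   np.array([0, 0, 0, 1, 1, 0], dtype=float),
    "emu":   np.array([1, 1, 0, 0, 1, 1], dtype=float),
}

def cosine(a, b):
    """Similarity as the cosine of the angle between feature vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Known property judgments: robins lay eggs, dogs do not.
lays_eggs = {"robin": 1.0, "dog": 0.0}

# Similarity-weighted generalization of "lays eggs" to the novel emu.
weights = {name: cosine(features["emu"], features[name]) for name in lays_eggs}
prediction = sum(w * lays_eggs[name] for name, w in weights.items()) / sum(weights.values())
print(f"P(emu lays eggs) ~ {prediction:.2f}")  # the similar robin dominates
```

Note that no structured theory of animals appears anywhere here: the generalization falls out of raw feature statistics.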
It’s under this appeal to parsimony that Connectionism was formalized as a cognitive framework. Connectionist models impose two architectural constraints, inspired by neuroscience, on theories:

1. Mental representations are patterns of activation distributed over many simple, neuron-like processing units.
2. Knowledge is stored in the weighted connections between those units, and learning proceeds by adjusting those weights through experience.
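As a caricature of these two commitments, here is a minimal sketch; the architecture, the toy data, and the delta-rule training are my assumptions, in the spirit of simple pattern-associator models. Every fact about which items have which features is stored in a single weight matrix and learned by error-driven weight adjustment:

```python
import numpy as np

rng = np.random.default_rng(0)

items    = ["robin", "dog", "emu"]
features = ["feathers", "beak", "fur", "lays_eggs"]

# Localist one-hot input per item; target feature patterns (toy data).
X = np.eye(len(items))
Y = np.array([
    [1, 1, 0, 1],   # robin
    [0, 0, 1, 0],   # dog
    [1, 1, 0, 1],   # emu
], dtype=float)

# All "knowledge" lives in the connection weights W.
W = rng.normal(scale=0.1, size=(len(items), len(features)))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Delta-rule learning: nudge each weight to reduce the prediction error.
lr = 0.5
for _ in range(2000):
    pred = sigmoid(X @ W)
    W += lr * X.T @ (Y - pred)

print(np.round(sigmoid(X @ W), 2))  # each row recovers the item's features
```

There is no explicit rule like “birds lay eggs” stored anywhere; whatever regularities the network exhibits are implicit in the learned weights.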
The Connectionist endeavor is then to see which problems can be solved when formalized under these constraints, and how those problems need to be specified. In the process, we learn constraints on representations and make predictions for how these constraints influence behavior.