
The Systematicity of Thought (ㅅ´˘`)


What is the systematicity of thought? How does classicism account for this property of cognition? Can connectionism explain the systematicity of thought?


Introduction:

The study of cognition lies at the centre of cognitive science, a field that attempts to comprehend the complexities of the mind and its processes. The concept of systematicity is critical to this attempt: it refers to the structured, rule-governed character of cognition that underpins human cognitive powers. Systematicity is essential for understanding why and how humans can entertain a diverse range of interconnected, rationally structured thoughts. The property is not of merely academic interest; its implications span disciplines from neuroscience, which seeks the biological basis of human cognitive abilities, to artificial intelligence, where understanding the architecture of human thought is essential to building more sophisticated, human-like systems. This essay examines the concept of systematicity within two paradigms, classicism and connectionism, comparing their strengths and weaknesses in accounting for the structure of thought.


Classicism:

In cognitive science, classicism proposes a picture of the mind in which thoughts and cognitive processes emerge from the manipulation of symbols in accordance with formal rules. The theory, known for its predictive success and explanatory depth across a wide range of domains, from verbal comprehension to sophisticated problem solving, captures human cognition in terms of symbolic logic and computational algorithms. At the heart of this tradition is John Haugeland, who argued that mental activities like thinking and understanding can themselves be viewed as computational processes (Haugeland, 1985). His case gave early cognitive science one of its founding claims: that thoughts are symbol manipulations, much as a computer’s operations are. Developed in the early years of the field, this view framed the mechanisms of thought in the systematic terms of computer science and opened new ways of formalising cognition.

Building on these foundational ideas, Daniel Dennett introduces a nuanced expansion, focusing on the inherent flexibility and richness that symbolic processing affords. Through his analysis, Dennett shows how this approach does not simply simulate mechanical computation but captures the versatility of human cognition. Dennett illustrates this through the domain of language, where the application of grammatical rules to symbols generates the boundless diversity of human communication (Dennett, 1987). This highlights a key strength of the symbolic approach: its capacity to model not just the structural aspects of thought but its dynamic and creative essence. Classicism’s success in modelling cognitive processes is exemplified in artificial intelligence, where expert systems and natural language processing algorithms embody its principles (Bechtel, 1997).
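To make the point concrete, the sketch below (a toy illustration of my own, not an example from Dennett) shows how a single symbolic rule, S → NP V NP, generates a systematic family of sentences: any system that can compose “John loves Mary” can, by the very same rule, compose “Mary loves John”.

```python
# A minimal sketch of symbolic, rule-governed generation (my own toy
# example, not drawn from any cited author): one rule, S -> NP V NP,
# applied over a small vocabulary yields a systematic family of thoughts.
from itertools import product

NAMES = ["John", "Mary", "the robot"]   # terminal symbols: noun phrases
VERBS = ["loves", "fears"]              # terminal symbols: transitive verbs

def sentences():
    """Apply the single rule S -> NP V NP across all symbol combinations."""
    for subj, verb, obj in product(NAMES, VERBS, NAMES):
        if subj != obj:                 # skip reflexive cases for simplicity
            yield f"{subj} {verb} {obj}"

if __name__ == "__main__":
    for s in sentences():
        print(s)   # includes both "John loves Mary" and "Mary loves John"
```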

Furthermore, Elman et al. (1996) modelled a developmental sequence of linguistic learning using recurrent neural networks. Although this is a connectionist architecture rather than a classical symbolic one, it bears directly on the classicist’s central claim, since it shows that adaptive systems can capture a degree of systematicity through pattern recognition and generalisation, and it points towards a future in which such models are applied to complex problem-solving activities (Elman et al., 1996). However, the classicist narrative of the mind as a computational system is met with alternative perspectives, notably from Patricia Churchland. Rather than accepting the premises of classicism uncritically, Churchland interrogates them and exposes their limitations. She emphasises the complexity of human thinking in a way that goes beyond the confines of symbolic logic, describing cognitive processes as nuanced and context-dependent (Churchland, 1993). This view challenges not just the model’s explanatory capability but also reflects ongoing debates within cognitive science about how mental operations are accurately portrayed.
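The recurrent architecture at issue can be sketched in a few lines. The following is a schematic illustration of an Elman-style network (my own simplification, not the actual model from Elman et al., 1996): the hidden state is fed back as context at each step, so the network’s response to a symbol depends on the sequence that preceded it.

```python
# A schematic sketch of an Elman-style recurrent network (a simplification
# for illustration, not the published model): the hidden state is fed back
# as context, so identical symbols in different orders yield different states.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 4, 8                                     # toy sizes
W_xh = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # context -> hidden

def run(sequence):
    """Process one-hot encoded symbols, carrying hidden state across steps."""
    h = np.zeros(n_hidden)                  # context starts empty
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h)    # new state mixes input and context
    return h

# The same symbols in a different order produce a different final state:
a, b = np.eye(n_in)[0], np.eye(n_in)[1]
print(run([a, b]))   # "a then b"
print(run([b, a]))   # "b then a" -- sequence order matters
```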


Connectionism:

The logical and algorithmic character of human cognition is well captured by classicism, which provides a strong foundation for comprehending the mind. It offers a methodical, rule-based account of cognition, modelled on computational procedures. However, given the nature of the human brain and its capacity for invention and adaptation, the systematicity of thought may be more than a matter of symbols and rules. This is where connectionist models enter as a compelling alternative account of cognitive processes. Connectionist models are distinct from classical ones: instead of depending exclusively on discrete symbol manipulation, they are inspired by the brain’s organic networks, and they suggest that thinking results from intricate, dynamic interactions among a large number of simple processing units, loosely analogous to neurons.

John Tienson is at the forefront of articulating this viewpoint, delving into the mechanics of connectionist models and explaining how these systems imitate the neuronal processes that underpin human cognition. Tienson argues that cognitive experiences arise from dynamic patterns of activity across networks of units rather than from the manipulation of discrete symbols. On this view, ideas, memories, and even consciousness itself are the result of intricate interactions among innumerable small units, mirroring the decentralised, parallel processing of the brain (Tienson, 1987). The hypothesis has far-reaching implications, since it suggests a more adaptable and flexible cognitive architecture that inherently treats learning as a gradual, experience-driven process.
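A minimal sketch (again my own toy illustration, in the spirit of Tienson’s description rather than an example of his) makes the contrast vivid: no rule is stored anywhere in the code below; a single unit learns a logical pattern purely by adjusting its connection weights in response to feedback.

```python
# A minimal sketch of experience-driven learning (a toy illustration, not
# from Tienson): no rule is written anywhere; a single unit learns logical
# AND by repeatedly nudging its weights in response to error feedback.
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical task chosen for illustration: respond only when both
# inputs are active, learned from examples rather than programmed.
inputs  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
targets = np.array([0, 0, 0, 1], dtype=float)

weights, bias, lr = rng.normal(size=2), 0.0, 0.1

for epoch in range(50):                        # gradual, experience-driven
    for x, t in zip(inputs, targets):
        y = 1.0 if weights @ x + bias > 0 else 0.0
        error = t - y                          # feedback from the environment
        weights += lr * error * x              # delta rule: nudge the weights
        bias    += lr * error

print([int(weights @ x + bias > 0) for x in inputs])  # expected: [0, 0, 0, 1]
```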

Andy Clark further explores the strengths of connectionism, emphasising its capacity to model learning and adaptation. Clark presents examples of how connectionist networks, through processes like neural plasticity, can adapt to new information and learn from environmental feedback (Clark, 2001). One illustration is how such networks can be trained to recognise patterns or solve problems without explicit programming, effectively learning in a manner analogous to human cognitive development. Such examples highlight connectionism’s potential to bridge the gap between computational models and the biological realities of human cognition, offering a robust framework for understanding phenomena like language acquisition and memory formation. Classicism, by contrast, faced mounting criticism on exactly this front. The development of the expert system MYCIN in the 1970s demonstrated the application of rule-based logic to medical diagnosis, embodying classicism’s principles of knowledge representation and reasoning; yet despite its success, MYCIN also exposed classicism’s limitations in flexibility and learning, particularly when dealing with novel scenarios beyond its programmed rules. The classicist framework faced further scrutiny with Dennett’s examination of the frame problem in artificial intelligence, which points to the challenges of applying rule-based logic to the multifaceted complexity of real-world interactions.
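The brittleness in question can be illustrated with a toy rule base (hypothetical rules of my own; MYCIN’s actual rules were far richer, with certainty factors and backward chaining): cases covered by a rule are handled crisply, while any novel case simply falls through, with no graceful fallback.

```python
# A toy illustration of rule-based brittleness (hypothetical rules, not
# MYCIN's): inputs covered by a rule get a crisp answer, while any novel
# scenario falls outside the programmed rules entirely.
RULES = [
    # (required findings, conclusion) -- invented for illustration only
    ({"fever", "stiff_neck"}, "suspect meningitis"),
    ({"fever", "cough"},      "suspect respiratory infection"),
]

def diagnose(findings: set[str]) -> str:
    for required, conclusion in RULES:
        if required <= findings:           # all the rule's conditions hold
            return conclusion
    return "no rule applies"               # novel case: the system is stuck

print(diagnose({"fever", "cough"}))        # covered case -> crisp answer
print(diagnose({"fatigue", "rash"}))       # novel case   -> "no rule applies"
```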

These points illustrate exactly the limitations that connectionism seeks to address, particularly through its emphasis on learning and adaptation via emergent network behaviour. Yet in his analysis, Daniel Dennett also draws attention to possible drawbacks of the connectionist approach, particularly regarding its capacity to explain higher-order cognitive processes. Dennett contends that while connectionist models are very good at modelling certain forms of learning and pattern identification, they may struggle to reproduce the ordered, rule-governed aspects of mind that characterise human cognition (Dennett, 1987). For example, the difficulty of describing language’s syntactic structures, or the logical processes that underlie reasoning, raises the possibility that connectionist networks in their conventional configuration lack the architecture required to capture the full complexity of these processes. This criticism calls for a re-evaluation of connectionism’s scope, as well as the investigation of hybrid models that combine the best features of classicist and connectionist methods to explain the intricacies of human cognition.


Considering the viewpoints:

To reconcile these opposing points of view, the conversation in cognitive science is shifting towards integrative models that aim to capitalise on the advantages of both classicism and connectionism. Drawing on the adaptive models championed by Clark (2001) and on Haugeland’s (1985) foundational insights, hybrid cognitive architectures present a promising route forward. These frameworks seek to combine the emergent, dynamic qualities of connectionist networks with the symbolic, rule-based processing characteristic of classicism. Combining the two approaches could yield a more complete model of cognition, one that accounts both for the flexible aspects of cognitive processes and for the structured nature of thought. The exploration of such integrative models raises the pivotal question: can connectionism, when combined with elements of classicism, adequately explain the systematicity of thought? Systematicity, a fundamental feature of human cognition, refers to our ability to generate a coherent and structured array of related thoughts from underlying cognitive processes.

Classicism has traditionally been favoured for its clear explanation of this feature, attributing systematicity to the manipulation of discrete symbols according to formal rules. Despite this, I firmly believe the emergence of hybrid models suggests a pathway for connectionism to contribute to the explanation. By integrating symbolic structures within connectionist networks, these models can leverage the pattern-recognition and learning capabilities of neural-like systems while also accommodating the rule-governed aspects of systematic thought. Such integrated methods have several potential advantages. By incorporating both learned patterns and explicit rule-based reasoning, they could improve the reliability of artificial intelligence systems and offer more nuanced models of cognitive processes, ones that reflect both the flexibility and the specificity of human thought. They can also offer a richer framework for understanding how cognitive processes develop and function within the brain. However, significant obstacles stand in the way of building and deploying such hybrid models: reconciling fundamentally divergent theories of the structure of cognition, devising methods for combining symbolic and sub-symbolic processes, and conducting the empirical research needed to confirm how well these models capture the intricacy of human cognition.
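As a highly simplified sketch of what such a hybrid might look like (an assumption of my own, not a published architecture), the code below pairs a connectionist-style scorer, which proposes a category from raw features, with a symbolic rule layer that then reasons over that category using explicit, inspectable rules.

```python
# A highly simplified sketch of one possible hybrid architecture (my own
# assumption, not a published model): a sub-symbolic scorer proposes a
# category, and a symbolic rule layer reasons over it with explicit rules.
import numpy as np

rng = np.random.default_rng(2)
CATEGORIES = ["animal", "vehicle"]       # hypothetical labels
W = rng.normal(size=(2, 3))              # toy weights standing in for a trained network

def subsymbolic_classify(features: np.ndarray) -> str:
    """Connectionist half: soft pattern recognition over raw features."""
    scores = W @ features
    return CATEGORIES[int(np.argmax(scores))]

SYMBOLIC_RULES = {                       # classicist half: explicit, inspectable
    "animal":  "is alive, so it can move on its own",
    "vehicle": "is a machine, so it needs an operator or program",
}

def hybrid_inference(features: np.ndarray) -> str:
    category = subsymbolic_classify(features)         # learned, sub-symbolic step
    return f"{category}: {SYMBOLIC_RULES[category]}"  # rule-governed step

print(hybrid_inference(np.array([0.9, 0.1, 0.3])))
```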


Conclusion:

The exploration of systematicity within the paradigms of classicism and connectionism reveals the multifaceted nature of cognitive science’s endeavour to understand human thought. While classicism offers a clear, rule-based model of cognition that aligns with computational and logical frameworks, connectionism provides a biologically inspired perspective that emphasises learning, adaptability, and emergence. The ongoing dialogue between these perspectives, enriched by the contributions of the philosophers discussed above, highlights the complexity of modelling cognition and the potential of a more integrated approach to capture the depth of human thought.


References:

  1. Bechtel, W. (1997). Directions in connectionist research: Tractable computations without syntactically structured representations. Metaphilosophy, 28, 31–62.
  2. Churchland, P. S. (1993). The co-evolutionary research ideology. In A. Goldman (Ed.), Readings in Philosophy and Cognitive Science. MIT Press.
  3. Clark, A. (2001). Mindware: An Introduction to the Philosophy of Cognitive Science. Oxford University Press.
  4. Dennett, D. C. (1987). Cognitive wheels: The frame problem of AI. In Z. Pylyshyn (Ed.), The Robot’s Dilemma: The Frame Problem in Artificial Intelligence. MIT Press.
  5. Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D., & Plunkett, K. (1996). Rethinking Innateness: A Connectionist Perspective on Development. MIT Press.
  6. Haugeland, J. (1985). Artificial Intelligence: The Very Idea. MIT Press.
  7. Tienson, J. (1987). An introduction to connectionism. The Southern Journal of Philosophy, 26 (Supplement), 1–16.