Reasoning Algorithms Across Species, Diagnoses, and Development: Theoretical Frameworks Informing Causal Manipulations: Workshop Summary

Date: April 23, 2025, 11:00 a.m.–5:00 p.m. ET
Location: Virtual

Key Highlights and Action Items

Reasoning algorithms are neural activity patterns and pathways that manipulate information to extract new knowledge. This workshop focused on understanding reasoning processes across species and developmental stages, identifying how brain networks carry out different types of reasoning, and exploring how to bridge animal, human, and artificial intelligence (AI) models of reasoning.

Session 1: Cross-Species and Developmental Insights into Reasoning

  • Relationships between distinct mental representations underlie higher-level cognitive abilities, and the capacity for relational reasoning is a critical component of fluid intelligence.
  • Humans’ reasoning abilities are an evolutionary adaptation to the environment, as evidenced by a focus on the number of objects in the physical world as a basic unit of representation, rather than on other parameters such as surface area. Cross-species comparative studies demonstrate that non-human animals similarly recognize causal connections among objects in the physical world as well as in the social world of agents and intentions.
  • Reasoning requires working memory and attentional control to switch between internal models and external information. Adaptive cognitive mechanisms, including curiosity-driven exploration, facilitate information gathering and integration into coherent mental models.
  • Diverse species across phylogenetic lines can judge numerical quantities, suggesting that this kind of reasoning is evolutionarily conserved. Reasoning is constrained by its evolutionary design for a world of discrete material objects, its inherent parsimony bias, and its limits on cognitive capacity.
  • Nonhuman primates and young children have been shown to hold abstract ideas about objects and the logical relationships between them. Humans exhibit a preference for hierarchical pattern processing, contrasting with a tendency in monkeys toward flat ordinal processing, a difference significantly modulated by working memory capacity. Quantifiable developmental and species differences exist in task capability depending on the level of working memory capacity and attentional control required.
  • Children seem able to learn both object-based rules and abstract relational rules and to apply them flexibly depending on learning context. Capabilities in this area are demonstrably influenced by language, culture, and level of schooling. Relational reasoning (e.g., same-different conceptualization) can follow a U-shaped developmental trajectory in some cultures (such as the United States), potentially due to learned object biases from language, contrasting with more linear development in other cultures.
  • The nature of an individual’s experiences leads them to privilege certain solutions over others when reasoning. Such learned biases help an individual focus on a particular set of potential solutions for problems rather than an infinite set, but overcoming maladaptive biases requires more cognitive effort.
  • The capacity for representing abstract relationships is present in children from the youngest ages, but differences in reasoning across individuals and populations are influenced by a combination of factors including selective attention allocation mechanisms, cultural context, language, schooling, and capacity limits on working memory and attention. Although targeted cognitive interventions can override some biases, certain changes are limited by anatomical and functional constraints in parts of the brain that require maturation to fully support higher-level thinking.
  • Semantic representations built throughout early childhood provide the necessary foundation for relational reasoning.
  • Primates build their understanding of relationships by exploring space and objects, thus creating foundational patterns that underlie information ordering. In contrast, AI models of reasoning build relationships indirectly by absorbing language rather than having direct multisensory exposure to the causal logic of the world, limiting their ability to develop grounded conceptual understanding.
  • Context serves as a constraint on generalizations and changes the likelihood of each potential solution, allowing an individual the flexibility to pay attention to objects or abstract relations as needed.

Session 2: Neurobiological Foundations of Reasoning

  • The psychological ability to infer transitive relationships is a fundamental form of reasoning and is useful for understanding hierarchical organization; for example, transitive inference is thought to be important in stabilizing social hierarchies while reducing conflicts (a minimal worked example appears after this list).
  • Generalization (i.e., recognizing similarities across different items) and differentiation (i.e., distinguishing between items) are fundamental, complementary reasoning processes that can be affected by numerous disorders. Neural representation for simultaneous generalization and differentiation can be achieved by superimposing distinct population-level coding schemes: collinear codes, which support generalization by responding similarly to items within a category, and orthogonal codes, which enable differentiation by providing unique neural responses for individual items. This combination effectively creates a flexible, semi-orthogonal neural code (a toy simulation appears after this list). The dorsal anterior cingulate cortex generates a "learnability signal," crucial for distinguishing structured, learnable environments from random ones, thereby guiding the strategic allocation of cognitive resources.
  • Value is a component of generalization and differentiation, allowing individuals to compare dissimilar goods through common utility metrics.
  • Learned information can be stabilized by strengthening the synaptic connections between neurons that support particular associations. Memory consolidation involves reorganizing learned information, creating new information or knowledge that supports adaptive behavior. One example of this is using a learned spatial map to infer a novel shortcut, such as when a barrier is removed. The downside of these shortcuts is their potential to create a map that is distorted relative to the external world.
  • When information is learned under conditions of elevated noradrenaline, a neuromodulator that can be associated with stress, associations are spread further on the neural map, facilitating the creation of new knowledge beyond direct observation but leading to more overgeneralization errors. Rest periods, particularly via hippocampal sharp-wave ripples, are critical not just for consolidating experienced events but also for the generative replay of inferred and potentially novel relationships, contributing to building predictive cortical models.
  • Cognitive control, or the selection of actions according to plans, relies on internal logical reasoning sequences. AI models have shown poor abilities in this area compared with biological systems, suggesting fundamental differences in functional architecture.
  • Following established sequences does not require complex decision-making and can be modulated by changes in reward. Disorders in this type of processing often lead to perseverative behavior patterns that impede sequence completion.
  • Comprehensive large data sets are needed to incorporate neurodiversity and the effects of various disorders into how these processes are understood. Mental structures that are adaptive for many people may be maladaptive for those with differences in processing pathways.
  • Post-traumatic stress disorder may be conceptualized as an overgeneralization effect in which a contextual cue associated with a negative experience triggers the recall of negative memories through hyperactive pattern completion mechanisms (a toy pattern-completion sketch appears after this list).
  • Many types of spatial maps exist in the brain, encompassing many ways of representing information, and all are tied to abstract processes at varying degrees of hierarchical organization.
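
To make transitive inference concrete, the sketch below derives a full rank order from adjacent premise pairs and then evaluates a pair that was never directly trained. The items, premises, and ranking heuristic are illustrative assumptions, not material presented at the workshop.

```python
# Transitive inference from adjacent premise pairs (illustrative sketch).
# Given only the trained premises A>B, B>C, C>D, and D>E, an agent can rank
# all items and judge the novel pair B vs. D without ever seeing it directly.

premises = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "E")]  # (winner, loser)

def rank_items(premises):
    """Derive a total order by computing a tiny transitive closure:
    if A beats B and B beats C, then A also beats C."""
    items = {x for pair in premises for x in pair}
    beats = {x: set() for x in items}
    for winner, loser in premises:
        beats[winner].add(loser)
    changed = True
    while changed:  # propagate dominance through chains until stable
        changed = False
        for x in items:
            closure = set(beats[x])
            for y in beats[x]:
                closure |= beats[y]
            if closure != beats[x]:
                beats[x], changed = closure, True
    # rank by how many items each item dominates, directly or indirectly
    return sorted(items, key=lambda x: len(beats[x]), reverse=True)

order = rank_items(premises)
print(order)                                 # ['A', 'B', 'C', 'D', 'E']
print(order.index("B") < order.index("D"))   # True: the untrained pair B > D is inferred
```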
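
The superimposed collinear and orthogonal coding schemes described above can be illustrated with a toy population code. The number of neurons, the categories, and the mixing weight below are arbitrary assumptions chosen only to show how a single vector code can support generalization and differentiation at the same time.

```python
import numpy as np

# Semi-orthogonal population code (illustrative sketch; sizes are arbitrary).
rng = np.random.default_rng(0)
n_neurons, items_per_category = 200, 4

def make_code(weight_shared=0.7):
    """Each item's population vector = a shared category direction (the
    collinear part, supporting generalization) plus an item-specific random
    direction (the near-orthogonal part, supporting differentiation)."""
    codes = {}
    for category in ("A", "B"):
        shared = rng.standard_normal(n_neurons)
        shared /= np.linalg.norm(shared)
        for i in range(items_per_category):
            unique = rng.standard_normal(n_neurons)
            unique /= np.linalg.norm(unique)
            codes[(category, i)] = weight_shared * shared + (1 - weight_shared) * unique
    return codes

codes = make_code()
cosine = lambda u, v: u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
within = cosine(codes[("A", 0)], codes[("A", 1)])   # high: same-category items look alike
across = cosine(codes[("A", 0)], codes[("B", 0)])   # near zero: categories stay distinct
print(f"within-category similarity {within:.2f}, across-category {across:.2f}")
```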
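
The hyperactive pattern completion mechanism mentioned in the bullet on post-traumatic stress disorder can be caricatured with a Hopfield-style autoassociative network, in which a partial contextual cue retrieves an entire stored memory. The network size, stored patterns, and level of cue corruption are invented for illustration.

```python
import numpy as np

# Toy autoassociative (Hopfield-style) network demonstrating pattern
# completion: a mostly corrupted cue still retrieves a whole stored memory.
rng = np.random.default_rng(1)
n = 64
memories = rng.choice([-1, 1], size=(3, n))          # three stored experiences
W = sum(np.outer(m, m) for m in memories) / n        # Hebbian weight matrix
np.fill_diagonal(W, 0)                               # no self-connections

cue = memories[0].astype(float)
cue[n // 4:] = rng.choice([-1, 1], size=3 * n // 4)  # only 25% of the cue is intact

state = cue
for _ in range(10):                                  # recurrent settling
    state = np.where(W @ state >= 0, 1.0, -1.0)

overlap = (state @ memories[0]) / n                  # close to 1.0 at this low memory load
print(f"overlap with the stored memory after completion: {overlap:.2f}")
# A lower retrieval threshold (hyperactive completion) would let even weaker
# cues trigger full recall -- one analogy for overgeneralized memory in PTSD.
```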

Session 3: Computational Models of Reasoning

  • Reasoning and intelligence can be explored through computation by unifying symbolic models, probabilistic models, and neural networks. Symbolic models support structured representations and logic-based inference, and probabilistic models capture uncertainty and decision-making under ambiguity. Neural networks excel in pattern recognition and learning from data but currently possess limited explicit representations of reasoning processes. Probabilistic programming, in particular, provides a powerful framework for quantitatively modeling nuanced human reasoning, such as intuitive physics and theory of mind, by enabling the simulation of underlying mental computations and inferences from sparse data.
  • Theory of mind captures how humans infer others’ beliefs, intentions, and desires. This can be effectively modeled through Bayesian inverse planning, which aligns well with adult human inferences and provides a robust framework for exploring developmental and comparative questions, suggesting evolutionary roots of probabilistic reasoning in social contexts (see the inverse planning sketch at the end of this list).
  • The brain automatically sorts information into an ever-growing set of categories and must assign each new event, which will never be exactly like a previous one, to one of them. Latent cause inference allows people to generalize learning within a latent cause while avoiding interference across latent causes by creating boundaries in learning processes (a minimal sketch appears at the end of this list).
  • Measuring individual differences in latent cause inference with computational models provides a mechanistic framework to explain why psychotherapy works for some individuals and not others based on identifiable computational phenotypes. For instance, individual variability in parameters such as beliefs about environmental predictability (stochasticity) and the propensity for selective replay of past (especially aversive) experiences offers a mechanistic account for differing psychotherapy outcomes.
  • Psychotherapy is fundamentally about learning new patterns of action, thought, and emotion. Cognitive behavioral therapy operates on the understanding that individuals’ reactions are caused not by events in the world themselves but by the interpretations or representations of those events in the brain. These interpretations can be changed by questioning automatic interpretations and taking action to test whether they are correct, a process that can be formalized as Bayesian belief updating (a toy belief update appears at the end of this list).
  • Latent cause inference is key to effective reasoning and helps organize information so that people can generalize correctly and as widely as possible to be able to learn from little information. It enables fast learning of new information while averaging noise and maintaining stable knowledge. Because latent cause inference is a strong and fundamental framework for learning and decision-making, alterations in it may be causally related to mental health disorders, offering potential computational biomarkers.
  • Language models in neural networks can inform how symbolic processes of interest to cognitive science are implemented or approximated with increasing fidelity by neural mechanisms.
  • Both language models and humans prefer reasoning outcomes that are plausible regardless of their logical validity. Furthermore, if the training data for the model includes human reasoning errors such as bias, the model will systematically reproduce this behavior, creating an important limitation for clinical applications.
  • Open models are needed for neuroscientists to understand how models generate their behavior, and language models must be trained to be cognitively plausible and relevant for translational applications.
  • Although many discussions of AI focus on the answers that the models deliver, individual reasoners vary dramatically and systematically in the way they reason about abstract and real-world domains—they can spontaneously discover strategies during the task; disagree on whether inferences are sensible; and be affected by exogenous factors, such as time pressure, stress, or sleep deprivation. The reasoning patterns produced by AI architectures often are highly dissimilar to how humans reason, so using AI to understand how humans reason requires AI systems that focus more on the mechanistic processes and algorithmic implementations of reasoning.
  • Reasoning that requires more complex mental models offers more opportunities for disruption. Humans reason in ways that often violate the constraints of formal frameworks, and they frequently prioritize mentally simulating possibilities over maintaining truth, which has significant implications for clinical decision support systems.
  • AI systems that learn to reason by being trained on reasoning products are vastly different from AI systems built to mimic human reasoning processes; the latter can help researchers better understand the factors that limit or enhance human reasoning and explain ways in which human reasoning is flexible, adaptable, and prone to error in both healthy and clinical populations.
  • Many aspects of language and thought are ambiguous or causal and difficult to formalize into structured models. Tools that can express uncertainty or ambiguity, such as probabilistic programming languages (sometimes incorporating language model primitives), allow researchers to ask new types of questions about pathological reasoning processes.
  • The potential misuse of such tools should be considered throughout the development process. People may use tools that are unproven or error-prone, and tools that produce inaccurate predictions can be dangerous, particularly in clinical contexts. Making these tools available under open science principles also allows them to be accessed by those who may not consider their limitations, necessitating careful implementation guidelines for translational applications.
  • The reasoning performance of large language models, like human cognition, can be affected by processing constraints, such as limits on allowable computation time or output verbosity. These constraints can influence the quality of outputs and may lead to error patterns that differ from those observed under unconstrained conditions, mimicking human reasoning under pressure. Trade-offs between speed and accuracy in computational models expose consistent vulnerabilities in all types of rationality (a small sampling-budget sketch appears below). Algorithms that approximate rational inference and decision-making can illuminate both computational limits and human cognitive processes, providing insight into cognitive dysfunction in psychiatric and neurological disorders.
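
As a concrete illustration of Bayesian inverse planning, the sketch below has an observer infer an agent’s goal from its movements in a toy gridworld. The goals, the softmax rationality parameter, and the observed trajectory are illustrative assumptions, not a model presented at the workshop.

```python
import math

# Bayesian inverse planning (illustrative sketch): infer a goal from actions.
# P(goal | actions) ∝ P(actions | goal) P(goal), assuming the agent noisily
# prefers actions that reduce its distance to the goal.
goals = {"coffee": (4, 0), "desk": (0, 4)}   # hypothetical goal locations
prior = {"coffee": 0.5, "desk": 0.5}
moves = {"right": (1, 0), "up": (0, 1)}
beta = 2.0                                   # rationality: higher = more reliably goal-directed

def action_likelihood(pos, action, goal):
    """Softmax over actions by how much each reduces distance to the goal."""
    def dist_after(a):
        nx, ny = pos[0] + moves[a][0], pos[1] + moves[a][1]
        return abs(goal[0] - nx) + abs(goal[1] - ny)
    scores = {a: math.exp(-beta * dist_after(a)) for a in moves}
    return scores[action] / sum(scores.values())

posterior = dict(prior)
pos = (0, 0)
for action in ["right", "right"]:            # observed: two steps to the right
    for name, goal in goals.items():
        posterior[name] *= action_likelihood(pos, action, goal)
    pos = (pos[0] + moves[action][0], pos[1] + moves[action][1])
total = sum(posterior.values())
posterior = {g: p / total for g, p in posterior.items()}
print(posterior)                             # belief shifts strongly toward "coffee"
```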
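
A minimal latent cause inference sketch follows, assuming a Chinese restaurant process prior over causes, Gaussian observations, and greedy maximum a posteriori assignment; the concentration parameter and data are invented, and published latent cause models are considerably richer.

```python
import math

ALPHA = 1.0   # concentration: the propensity to posit a new latent cause

def normal_pdf(x, mu, var):
    return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

def assign_causes(observations, obs_var=1.0, prior_var=4.0):
    """Greedily assign each observation to the most probable latent cause:
    existing causes are weighted by their size (CRP prior) times the
    likelihood of the observation under the cause's running mean; a new
    cause is weighted by ALPHA times a broad prior predictive density."""
    causes, labels = [], []
    for x in observations:
        weights = [len(members) * normal_pdf(x, sum(members) / len(members), obs_var)
                   for members in causes]
        weights.append(ALPHA * normal_pdf(x, 0.0, obs_var + prior_var))
        k = max(range(len(weights)), key=weights.__getitem__)
        if k == len(causes):
            causes.append([x])   # boundary: a new latent cause is inferred
        else:
            causes[k].append(x)
        labels.append(k)
    return labels

# A shift in the observations triggers a new inferred cause, creating a
# boundary that protects the old learning from interference.
print(assign_causes([0.1, -0.2, 0.05, 5.2, 4.9, 5.1]))   # -> [0, 0, 0, 1, 1, 1]
```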
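
The Bayesian belief updating framing of cognitive behavioral therapy can be shown with a toy Beta-Bernoulli update, in which behavioral experiments provide evidence against a pessimistic automatic interpretation. The prior counts and outcomes are invented for illustration.

```python
# Toy Bayesian belief update for a behavioral experiment in CBT.
# Belief: "If I speak up, it will go badly." Modeled as a Beta prior over
# the probability of a bad outcome, updated by observed outcomes.
prior_bad, prior_ok = 8, 2            # pessimistic prior: ~80% bad outcomes expected

outcomes = [0, 0, 1, 0, 0]            # five experiments: 1 = bad, 0 = ok
bad, ok = prior_bad, prior_ok
for o in outcomes:
    bad += o                          # count bad outcomes
    ok += 1 - o                       # count ok outcomes

print(f"prior P(bad) = {prior_bad / (prior_bad + prior_ok):.2f}")   # 0.80
print(f"posterior P(bad) = {bad / (bad + ok):.2f}")                 # 0.60: revised downward
```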
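
Finally, the speed-accuracy trade-off noted above can be sketched with a sampling-based judgment whose "mental sample" budget is varied; the task and budgets are arbitrary assumptions.

```python
import random

# Speed-accuracy trade-off in sampling-based inference (illustrative sketch):
# estimate an event's probability from n mental "samples"; fewer samples are
# faster but yield noisier, more extreme judgments, as under time pressure.
random.seed(0)
TRUE_P = 0.7

def judge(n_samples):
    hits = sum(random.random() < TRUE_P for _ in range(n_samples))
    return hits / n_samples

for budget in (3, 10, 100, 1000):
    estimates = [judge(budget) for _ in range(500)]
    mean = sum(estimates) / len(estimates)
    sd = (sum((e - mean) ** 2 for e in estimates) / len(estimates)) ** 0.5
    print(f"budget {budget:4d}: mean estimate {mean:.2f}, sd {sd:.3f}")
```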