Last reviewed on April 24, 2026.
The article on decision making and cognitive biases covers one face of human thinking — the part where a person picks among options under uncertainty. The other face is reasoning proper: working through a problem when the answer is not in front of you, the path is not obvious and several false starts are likely. Cognitive science has a long-running research programme on this kind of cognition, organised around a small set of recurring questions.
This page introduces the main concepts the field uses, the kinds of problem that have shaped the research, and the core distinction between reasoning that runs by rule and reasoning that runs by example.
Problems, search and the cost of thinking
The classical framing comes from the work of Allen Newell and Herbert Simon, and it still organises much of the field. A problem is defined by an initial state, a goal state, and a set of operators that transform one state into another. Solving the problem is a search through the space of states reachable by applying operators.
Three things follow from this framing.
- The hardest part is often representation. Once the problem is set up well, search is mechanical. Setting it up well — choosing what counts as a state, what counts as an operator, what to ignore — is the part where human reasoners differ most from each other and where insight happens.
- The search space is almost always too big to explore exhaustively. Even modestly sized puzzles have state spaces too large for brute force, and real-world problems are exponentially worse. So all serious problem-solving uses heuristics — rules of thumb that prune the search.
- Heuristics trade off completeness for speed. A good heuristic finds answers quickly when it works and fails predictably when it does not. The art is matching the heuristic to the structure of the problem.
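The search-and-heuristics picture can be made concrete with a short sketch. This is not any specific model from the literature, just a minimal greedy best-first search: states, operators that generate successor states, and a heuristic that decides which state to expand next. The toy state space (integers reachable by +1 and ×2) and all names are illustrative.

```python
import heapq

def best_first_search(start, goal, operators, heuristic, budget=10_000):
    """Greedy best-first search: always expand the state the heuristic
    rates closest to the goal. Returns a path of states, or None."""
    frontier = [(heuristic(start), start, [start])]
    seen = {start}
    while frontier and budget > 0:
        budget -= 1
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        for nxt in operators(state):          # apply each operator
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (heuristic(nxt), nxt, path + [nxt]))
    return None                               # budget exhausted or space empty

# Toy problem: states are integers, operators are "+1" and "*2".
ops = lambda n: [n + 1, n * 2]
h = lambda n: abs(21 - n)                     # heuristic: numeric distance to goal
print(best_first_search(1, 21, ops, h))
```

Note the trade-off from the bullet above: the greedy heuristic finds *a* path quickly, but not necessarily the shortest one — completeness and optimality are traded for speed.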
Insight: when the problem suddenly looks different
Some problems do not yield to incremental search. They yield to a sudden restructuring — the moment when "oh, it's a triangle, not a square" turns a hard problem into an easy one. The Gestalt psychologists called this insight, and it has been studied with classic puzzles: the nine-dot problem, the candle problem, the matchstick arithmetic task. What these tasks have in common is that the obvious framing is wrong, and progress requires abandoning it.
Insight has resisted easy explanation. Some researchers treat it as ordinary search through a poorly structured space; others argue it requires a distinct mechanism — a sudden change in the way the problem is represented. There is good evidence that incubation periods (taking a break) help, that anxiety hurts, and that the felt experience of "aha" is correlated with specific neural signatures. There is much less consensus on what underlying process produces the restructuring.
Analogy: solving by precedent
Much of human reasoning is not from first principles but from prior cases. A doctor who has seen this constellation of symptoms before is doing something different from a doctor reasoning out the differential diagnosis from scratch. A programmer who recognises a problem as "essentially graph traversal" has done most of the work just by mapping the new problem to a familiar one. Analogical reasoning is the cognitive science term for this transfer.
The research, much of it associated with Dedre Gentner and Keith Holyoak, has converged on a few robust observations.
- Surface similarity is what people notice first. Two problems that share concrete details — same characters, same scenery — feel similar even when their underlying structure is different.
- Structural similarity is what actually transfers. What you want to recognise is that the new problem has the same relations among its parts as the old one, even if the parts look nothing alike. This recognition is harder than noticing surface similarity, and is the limiting step for most analogical transfer.
- Explicit comparison helps. Asking learners to compare two examples and articulate what is similar reliably improves later transfer. This is one of the more robust applied findings in education research.
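The surface/structure distinction can be illustrated with a toy encoding (this is a deliberately crude sketch, not Gentner's structure-mapping model: cases are just sets of relation triples, and the example domains and relation names are mine). The classic solar-system/atom analogy shares no objects but all of its relations; a case that merely mentions the sun shares objects but little structure.

```python
# Each "case" is a set of (relation, agent, patient) triples.
solar = {("attracts", "sun", "planet"), ("orbits", "planet", "sun"),
         ("more_massive", "sun", "planet")}
atom = {("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus"),
        ("more_massive", "nucleus", "electron")}
weather = {("warms", "sun", "ground"), ("orbits", "planet", "sun")}

def surface_overlap(a, b):
    """Shared concrete objects — what feels similar at first glance."""
    objs = lambda case: {x for (_, s, o) in case for x in (s, o)}
    return len(objs(a) & objs(b))

def structural_overlap(a, b):
    """Shared relations — what actually supports analogical transfer."""
    rels = lambda case: {r for (r, _, _) in case}
    return len(rels(a) & rels(b))

print(surface_overlap(solar, weather), structural_overlap(solar, weather))  # 2 1
print(surface_overlap(solar, atom), structural_overlap(solar, atom))        # 0 3
```

The weather case *feels* more similar to the solar system (it shares the sun and the planet), but the atom is the better analogy: all three relations line up. That asymmetry is the point of the second and third bullets above.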
Deduction, induction and the limits of pure logic
Reasoning by rule is the part of cognition where philosophy and cognitive science most obviously meet. The classical distinctions are old and useful.
- Deductive reasoning moves from premises to conclusions that must be true if the premises are. From "all crows are black" and "this is a crow," it follows that this crow is black. Deductive validity is a property of the inference; the truth of the premises is a separate question.
- Inductive reasoning moves from observed cases to general claims. From many black crows, one infers that crows are generally black. The conclusion can always be wrong; the question is how confident the data warrant being.
- Abductive reasoning moves from an observation to the best explanation of it. From "the lawn is wet," one infers that it probably rained, even though it could have been a sprinkler. Most everyday reasoning is abductive in form.
Cognitive science has shown, repeatedly, that human reasoners do not closely follow formal logic when it diverges from the way the problem is naturally interpreted. The Wason selection task is the canonical example: most people get a logically simple version wrong when it is presented abstractly, but get a structurally identical version right when it is presented as a social-rule violation. The lesson is not that humans are bad at logic; it is that what looks like the same problem to a logician is not the same problem to a brain.
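The logic of the abstract Wason task can be written out directly. In the standard version, each card has a letter on one side and a number on the other, and the rule is "if a card has a vowel on one side, it has an even number on the other." A card needs turning only if its hidden face could falsify the rule. The sketch below encodes that check (the specific card faces E, K, 4, 7 follow the usual presentation; the function names are mine):

```python
def cards_to_turn(cards):
    """Return the cards whose hidden face could violate the rule
    'if vowel on one side, then even number on the other'."""
    def must_turn(face):
        if face.isalpha():
            # A visible vowel might hide an odd number (modus ponens).
            return face.lower() in "aeiou"
        # A visible odd number might hide a vowel (modus tollens).
        return int(face) % 2 == 1
    return [c for c in cards if must_turn(c)]

print(cards_to_turn(["E", "K", "4", "7"]))   # ['E', '7']
```

Most participants correctly turn E but also turn 4 (which can reveal nothing: the rule says nothing about what even numbers must have) while neglecting 7, the modus tollens card — the pattern the paragraph above describes.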
A worked example: the Tower of Hanoi
The Tower of Hanoi puzzle — three pegs, several discs of different sizes, move them all from one peg to another without ever putting a larger disc on a smaller one — illustrates several of the ideas at once.
A naïve solver searches forward: try a move, see what happens, try another. With three discs, this works. With seven, the search space is large enough that most people get lost without a strategy. The standard heuristic is means-ends analysis: identify the difference between the current state and the goal, find an operator that reduces it, and apply it. If applying it requires a precondition that does not hold, set up a sub-goal to make the precondition hold first. Means-ends analysis turns a search problem into a recursive plan.
The same problem also illustrates analogy. A solver who sees that the seven-disc problem reduces to "move the top six elsewhere, move the bottom disc, then move the six back" has noticed a structural pattern that the three-disc version trained them on. The puzzle is much easier once the pattern is named.
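The decomposition described above translates almost word for word into a recursive plan. This is a standard sketch of the recursive Tower of Hanoi solution (the peg labels "A", "B", "C" are arbitrary):

```python
def hanoi(n, src, dst, spare):
    """The recursive plan from the text: move the top n-1 discs to the
    spare peg, move the largest disc, then move the n-1 back on top."""
    if n == 0:
        return []
    return (hanoi(n - 1, src, spare, dst)   # clear the way
            + [(src, dst)]                  # move the largest disc
            + hanoi(n - 1, spare, dst, src))  # restack on top of it

moves = hanoi(7, "A", "C", "B")
print(len(moves))   # 2**7 - 1 = 127 moves, the provable minimum
```

The recursion is the sub-goaling of means-ends analysis made explicit: the precondition for moving the largest disc (nothing on top of it, destination clear) becomes a sub-problem of the same form, one disc smaller.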
Common mistakes when reading problem-solving research
- Treating "problem solving" as a single ability. Performance on insight problems, transformation problems, and inductive learning tasks is only weakly correlated. Someone good at one kind of problem is not necessarily good at another.
- Underestimating the cost of representation. Studies that hand participants pre-formatted problems are studying search, not the harder skill of figuring out which problem is in front of you.
- Conflating expertise with intelligence. Expert reasoning relies heavily on stored patterns from a particular domain. Experts are not solving general puzzles faster; they are recognising specific situations.
- Reading laboratory tasks as everyday cognition. Most everyday reasoning is messy, incremental, and embedded in social context. Lab tasks isolate components for study; the components do not always reassemble in real life.
Where this connects on the site
Reasoning sits between several other topics. The mechanisms that make a heuristic feel right are studied as cognitive biases; the memory systems that supply prior cases for analogy are themselves an active research area; the AI attempt to build systems that reason has both inspired and been inspired by this part of cognitive science. The disciplines overview gives the surrounding context, and the glossary defines the key terms used here.