how to solve problems

This is a book by a man with the fantastic name of Wayne Wickelgren – and I want to mention it because Wickelgren was a psychologist and a cognitive scientist, and the problem-solving methods he discusses all tend to map one-to-one to the algorithm design methods that are taught in computer science.

In CS, the main three are:

Brute Force

Enumerate every possible combination, solution, or problem state – then evaluate each one to find the actual solution.  This is very slow, but it’s usually the easiest approach to think about.  Wickelgren talks about “classifying action sequences” by trial and error, including random trial and error.  A key note from Wickelgren is the idea of limiting your action sequences by ignoring ones that lead to the same result – which is roughly the same thing as memoizing a function.
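
To make that concrete, here’s a rough Python sketch (mine, not the book’s): a brute-force search over action sequences for the classic two-jug puzzle. Pruning states we’ve already reached is the “same result” trick in action, and it’s effectively memoization on the problem state.

```python
from collections import deque

def jug_search(cap_a, cap_b, target):
    """Brute-force search over action sequences for the two-jug puzzle,
    skipping any state (a, b) we've already reached -- the 'ignore
    sequences with the same result' idea, i.e. memoizing on state."""
    seen = {(0, 0)}
    queue = deque([((0, 0), [])])          # (state, actions taken so far)
    while queue:
        (a, b), actions = queue.popleft()
        if target in (a, b):
            return actions
        moves = {
            "fill A":    (cap_a, b),
            "fill B":    (a, cap_b),
            "empty A":   (0, b),
            "empty B":   (a, 0),
            "pour A->B": (a - min(a, cap_b - b), b + min(a, cap_b - b)),
            "pour B->A": (a + min(b, cap_a - a), b - min(b, cap_a - a)),
        }
        for name, state in moves.items():
            if state not in seen:          # prune duplicate results
                seen.add(state)
                queue.append((state, actions + [name]))
    return None                            # target isn't reachable

print(jug_search(3, 5, 4))   # a shortest action sequence that measures out 4
```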

Dynamic Programming

Solve a smaller version of the problem, then use that result to (inductively) build up the solution that you want.  Usually this involves some tricky ways of storing the results of the smaller subproblems.  The book talks about “hill climbing”, mentions getting stuck in local maxima, taking odd detours (sounds like graph theory, anyone?), and so on.  Wickelgren likewise points out that evaluating what your end state should be can be tricky.
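
The book doesn’t use this example, but a standard minimum-coin-change table shows the shape of it: solve every smaller amount first, store those answers, and build the one you want from them.

```python
def min_coins(coins, amount):
    """Bottom-up dynamic programming: best[k] is the fewest coins
    needed to make amount k, built inductively from smaller amounts."""
    INF = float("inf")
    best = [0] + [INF] * amount            # base case: zero coins make 0
    for k in range(1, amount + 1):
        for c in coins:
            if c <= k and best[k - c] + 1 < best[k]:
                best[k] = best[k - c] + 1
    return best[amount] if best[amount] != INF else None

print(min_coins([1, 5, 12], 15))   # -> 3 (5 + 5 + 5)
```

(Greedily grabbing the 12 first would need four coins here, which is the local-maximum trap in miniature.)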

Divide & Conquer

Split the problem into smaller versions, then combine them (usually recursively) to find the solution that you want.  Wickelgren mentions “subgoals”, which are distinct from hill climbing, as they’re not inductive – there’s even a binary tree as an example!
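
Merge sort isn’t in the book as far as I recall, but it’s the textbook illustration: split, solve each half recursively, then do a little work to combine.

```python
def merge_sort(xs):
    """Divide & conquer: split the list, sort each half recursively,
    then combine the two sorted halves by merging."""
    if len(xs) <= 1:                           # base case: already sorted
        return xs
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):    # the combine step
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))   # -> [1, 2, 5, 5, 6, 9]
```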


So those are the big three, which each get a chapter in the book.  The book also mentions proofs by contradiction, and “working backward”, which relates to both dynamic programming and divide & conquer solutions – whether you start from your solution and work “down”, or start from nothing and work “up”.
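
In code terms (my framing, not the book’s), that’s the difference between top-down memoized recursion and bottom-up tabulation of the same problem:

```python
from functools import lru_cache

# Working "down": start from the answer we want and recurse back to base cases.
@lru_cache(maxsize=None)
def ways_down(n):
    """Ways to climb n steps taking 1 or 2 at a time."""
    if n <= 1:
        return 1
    return ways_down(n - 1) + ways_down(n - 2)

# Working "up": start from nothing and build forward to the answer.
def ways_up(n):
    a, b = 1, 1                  # ways(0), ways(1)
    for _ in range(2, n + 1):
        a, b = b, a + b
    return b

print(ways_down(10), ways_up(10))   # both -> 89
```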

There’s likewise a chapter on “relationships between problems”, which is something that comes up in computer science all the time – both in the software engineering sense of having solved something like this before, and the computer science sense of reducing the problem you want to a known or simpler problem you can already solve.
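
A tiny example of that last trick (again, mine, not the book’s): “does any pair in this list sum to a target?” reduces to a problem we already solve well, set membership.

```python
def has_pair_sum(numbers, target):
    """Reduce 'is there a pair summing to target?' to repeated
    set-membership checks, which we already know how to do fast."""
    seen = set()
    for x in numbers:
        if target - x in seen:   # the complement was seen earlier
            return True
        seen.add(x)
    return False

print(has_pair_sum([3, 8, 11, 7], 15))   # True  (8 + 7)
print(has_pair_sum([3, 8, 11, 7], 6))    # False
```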

The two most striking points in the book that don’t appear in typical computer science theory: the first is a section on why, if you can’t solve a problem, you should take a break – and all the reasons taking a break is good for you.  The other is about inferring new information from the given information, which is a constant bugbear of mine.  What information do you need that you don’t have?  Can you derive it from the given data?  Or (in the software engineering sense) do you need to obtain it from somewhere else?