
made out of water molecules, which are made out of atoms, which are made out of quarks; computers are made out of circuit boards, which are made out of transistors and capacitors, which are made out of molecules; economies are made out of firms and households, which are made out of agents, which are made out of tissues, which are made out of cells, &c. This view is so attractive, in fact, that a number of philosophers have tried to turn it into a full-fledged metaphysical theory[1]. Again, I want to avoid becoming deeply embroiled in the metaphysical debate here, so let's try to skirt those problems as much as possible. Still, might it not be the case that something like degree of hierarchy is a good measure of complexity? After all, it does seem (at first glance) to track our intuitions: more complex systems are those which are "nested" more deeply in this hierarchy. This might succeed in capturing what it was about the mereological size measure that felt right: things higher up on the hierarchy seem to have (as a general rule) more parts than things lower down on the hierarchy. Moreover, this measure might let us make sense of the most nagging question that made us suspicious of the mereological size measure: how to figure out which parts we ought to count when we're trying to tabulate complexity.

As attractive as this position looks at first, it's difficult to see how it can be made precise enough to serve the purpose to which we want to put it here. Hierarchy as a measure of complexity was first proposed by Herbert Simon, back before the field of complex systems theory diverged from the much more interestingly named field of "cybernetics." It might be useful to look at how Simon actually proposed to recruit hierarchy to explain complexity; the difficulties, I think, are already incipient in his original proposal:


  1. See, e.g., Morgan (1923), Oppenheim & Putnam (1958), and (to a lesser extent) Kim (2002).
