Simulations and Mechanisms

Author: Tom Slee

Published: March 24, 2009

Note: This page has been migrated from an earlier version of this site. Links and images may be broken.

I’ve learned two lessons in the last couple of days.

First, if you want to get some attention for a blog post, call it something eschatological like “Online Monoculture and the End of the Niche”. If I had called it “Simulation of a 48-product market under simplistic assumptions” somehow I don’t think I would be writing a follow-up. I don’t like this lesson much. But I don’t feel too guilty: if I were really trolling for traffic I could have called it “Learning from the Big Penis Book” [see Music Machinery for why].

Second, no matter how hard you try to be clear, many people don’t get what you are trying to say. So maybe it’s not their fault. For examples, see some of the comments here and here, and even a bit here and on the original. The main complaint is that picking two example runs from a simplistic simulation of a small system with a small and fixed number of customers and products doesn’t simulate the entire Internet. Where is the statistical sampling, the exploration of sensitivity to parameters, the validation of the recommendation model? And on and on.

These folks don’t get why people do simple models of complex things.

The goal of simulations is not always to reproduce reality as closely as possible. In fact, building a finely tuned, elaborate model of a particular phenomenon actually gets in the way of finding generalizations, commonalities, and trends: the very detail that makes a model faithful to one case is what ties it to that case and hides whatever it shares with others.

For example (and I’m not comparing my little blog post to any of these people’s work), in chemistry, Roald Hoffmann got a Nobel Prize and may be the most influential theorist of his generation because he chose to use a highly simplified model of electronic structure (the extended Hückel model). It is well known that the extended Hückel model fails to include even the most elementary features needed to reproduce a chemical bond. Yet Hoffmann was able to use this simple model to identify and explain huge numbers of trends among chemical structures precisely because it leaves out so many complicating factors. Later work using more sophisticated models, like ab initio computations and density functional methods, allows much more accurate studies of individual molecules, but it is a lot harder to extract a comprehensible picture of the broad factors at work.
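To get a feel for just how stripped-down these models are, here is a sketch of the plain Hückel model, an even simpler cousin of Hoffmann’s extended version, applied to butadiene. In the plain model the π-orbital energies fall straight out of the eigenvalues of the carbon skeleton’s adjacency matrix; this is my own toy illustration, not anything from Hoffmann.

```python
import numpy as np

# Plain Hückel model for the pi system of 1,3-butadiene: four carbons in a
# chain. Orbital energies are E = alpha + x*beta, where x runs over the
# eigenvalues of the skeleton's adjacency matrix (alpha and beta are left
# symbolic; beta < 0, so larger x means lower energy).
A = np.zeros((4, 4))
for i in range(3):                       # bonds C1-C2, C2-C3, C3-C4
    A[i, i + 1] = A[i + 1, i] = 1.0

for x in sorted(np.linalg.eigvalsh(A), reverse=True):
    print(f"E = alpha + ({x:+.3f})*beta")
```

The whole electronic-structure problem collapses into graph connectivity, which is exactly why trends across families of related molecules become visible.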

Or in economics, think of Paul Krugman’s description of an economy with two products (hot dogs and buns). Silly, but justifiably so. In fact, read that piece for a lovely explanation of why such a thought experiment is worthwhile.

Or elsewhere in the social sciences, think of Thomas Schelling’s explorations of selection and sorting in Micromotives and Macrobehavior, or of Robert Axelrod’s brilliantly overreaching The Evolution of Cooperation, which built a whole set of theories on a single two-choice game and influenced a generation of political scientists in the process. All these efforts work precisely because they look at simple and even unrealistic models. That’s the only way you can capture mechanisms: general causes that lead to particular outcomes. More precise models would not improve these works; they would just obscure the insights.
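Axelrod’s two-choice game is just the iterated Prisoner’s Dilemma, and it is small enough to fit in a few lines. Here is a minimal sketch using the standard tournament payoffs (T=5, R=3, P=1, S=0) and 200-round matches, pitting tit-for-tat against always-defect; the strategies and match length echo Axelrod’s setup, but the code itself is only an illustration.

```python
# The iterated Prisoner's Dilemma behind Axelrod's tournaments, stripped to
# its core: standard payoffs and two canonical strategies.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_history):
    # Cooperate first, then copy whatever the opponent did last.
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)       # each strategy sees the other's past
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2)
        score1 += p1; score2 += p2
    return score1, score2

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (600, 600)
print(play(tit_for_tat, always_defect))  # exploited once, then stalemate
```

Everything in The Evolution of Cooperation grows out of variations on this one loop, which is the point: the game is simple enough that the mechanism, reciprocity, is impossible to miss.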

That said, there are valid questions. Under some circumstances, aggregating large numbers of opinions into a single recommendation can produce this odd combination of broader individual horizons and a narrower overall culture. But are there demonstrable cases of the monopoly populism model out there in the wild (aside from the big penis book)? Is this a common phenomenon or an uninteresting curiosity? Well, I don’t know. I do think so, obviously; otherwise I would not have written the post. But it’s a hunch, a hypothesis, a suggestion that I find intriguing and which I may or may not try to follow up. Hey, it’s a blog post, not an academic paper.
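Still, for anyone who wants to see the shape of the mechanism rather than argue about the post, here is a minimal sketch in the spirit of the original toy model. It is not the simulation from that post: the parameters, the purely random tastes, and the crude “most popular product you haven’t seen yet” recommender are all assumptions I am making up for illustration.

```python
import random
from statistics import mean

# A toy 48-product market, NOT the simulation from the original post: random
# private tastes, a small random awareness set per customer, and an optional
# recommender that shows each customer the current bestseller they have not
# yet seen. Every parameter is an arbitrary illustrative choice.
random.seed(1)
N_CUSTOMERS, N_PRODUCTS, ROUNDS = 200, 48, 20

tastes = [[random.random() for _ in range(N_PRODUCTS)]
          for _ in range(N_CUSTOMERS)]

def run(recommend):
    aware = [set(random.sample(range(N_PRODUCTS), 5))
             for _ in range(N_CUSTOMERS)]
    bought = [set() for _ in range(N_CUSTOMERS)]
    sales = [0] * N_PRODUCTS
    for _ in range(ROUNDS):
        for c in range(N_CUSTOMERS):
            if recommend:
                unseen = [p for p in range(N_PRODUCTS) if p not in aware[c]]
                if unseen:
                    # Broaden this customer's horizon with the bestseller.
                    aware[c].add(max(unseen, key=lambda p: sales[p]))
            pick = max(aware[c], key=lambda p: tastes[c][p])
            bought[c].add(pick)
            sales[pick] += 1
    # Two numbers to compare: individual horizons vs. overall concentration.
    return mean(len(b) for b in bought), max(sales) / sum(sales)

for label, rec in [("no recommender  ", False), ("with recommender", True)]:
    horizon, top_share = run(rec)
    print(f"{label}: products tried per customer = {horizon:.2f}, "
          f"top product's share of sales = {top_share:.3f}")
```

The two printed numbers are the whole story: in this toy, the recommender raises how many products each customer tries, while how far total sales concentrate on a few bestsellers depends on the parameters, which is exactly the sort of sensitivity a proper study, as opposed to a blog post, would explore.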