Tuesday, July 31, 2007

Business for Boneheads

I am thinking of capitalizing on the popularity of the "Dummies" series of books by publishing a book called Business for Boneheads. In the book, I'll lay out the secrets of how to create competitive advantage and build a successful business, based on mountains of research I've conducted on successful companies. Wanna buy a copy?

Here are a couple of reasons why you should be suspicious of my book (and of any other that claims to tell you how to succeed easily in business). First, you can't really learn anything about what causes success unless you study both successes and failures. Say I found that all of the successful companies I studied had red product logos. Well, I can't say much about any association between red logos and company success unless I know something about how many failed companies had red logos (if none of them did, perhaps I might be on to something).
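To make the logo example concrete, here's a minimal simulation (the 60% red-logo rate and 10% success rate are made-up numbers for illustration): when logo color has nothing to do with success, red logos show up among the winners at the overall base rate, and it's only the failure data that reveals this.

```python
import random

random.seed(3)
N = 10_000

# Logo color is assigned independently of success: 60% of ALL
# companies get red logos, and 10% of all companies succeed.
companies = [
    {"red_logo": random.random() < 0.6, "success": random.random() < 0.1}
    for _ in range(N)
]

successes = [c for c in companies if c["success"]]
failures = [c for c in companies if not c["success"]]

red_given_success = sum(c["red_logo"] for c in successes) / len(successes)
red_given_failure = sum(c["red_logo"] for c in failures) / len(failures)

# Studying successes alone, "60% of winners had red logos" sounds
# meaningful -- until you see that failures have red logos at the
# same rate. Both numbers land near 0.60.
print(f"P(red logo | success) ~ {red_given_success:.2f}")
print(f"P(red logo | failure) ~ {red_given_failure:.2f}")
```

Only by comparing the two conditional rates can you tell whether the "secret" of the winners is anything more than the base rate.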

Second, as Phil Rosenzweig argues in his new book, The Halo Effect, a lot of the conclusions of such a book would likely suffer from the fact that success shapes our perceptions. Let's say I interview managers from successful and unsuccessful companies about their strategic planning process. The successful managers would probably describe their clear, well-functioning planning process as one of the factors contributing to their success, while the unsuccessful managers would likely describe their muddled, problematic process as contributing to their failure. So, clear planning processes are critical to success. But, wait. Rosenzweig's point is that we would likely see this result even if the companies followed the exact same process. His book is definitely worth a read!

The final reason you should avoid my book is that no publicly available knowledge is going to help you create a competitive advantage. Let's say I correctly discover that having a CMEO (Chief Managerial Economics Officer) reliably led to competitive advantage in the companies I studied. So you, the astute reader, decide to hire a CMEO for your business, and no competitive advantage follows. What happened? Well, your competitor probably heard about the CMEO "secret" as well and hired one too. Now that everyone knows about it, no advantage is possible. Competitive advantage flows from having something that competitors can't easily duplicate - and you're not likely to find that on the shelves of your local bookstore.


  1. I would also add "Fooled By Randomness" to your suggested reading list (and "The Black Swan").

    Taleb's argument is that you could do everything "right" and still fail, just based on bad luck. Luck (randomness) plays a huge role in success when a small sample size is involved.

  2. Thanks for stealing the topic for a future post! I've read both of Taleb's books and second your recommendation.

  3. Same thing happens in Medicine:
    Why so much medical research is rot (From The Economist print edition)

    PEOPLE born under the astrological sign of Leo are 15% more likely to be admitted to hospital with gastric bleeding than those born under the other 11 signs. Sagittarians are 38% more likely than others to land up there because of a broken arm. Those are the conclusions that many medical researchers would be forced to make from a set of data presented to the American Association for the Advancement of Science by Peter Austin of the Institute for Clinical Evaluative Sciences in Toronto. At least, they would be forced to draw them if they applied the lax statistical methods of their own work to the records of hospital admissions in Ontario, Canada, used by Dr Austin.

    Dr Austin, of course, does not draw those conclusions. His point was to shock medical researchers into using better statistics, because the ones they routinely employ today run the risk of identifying relationships when, in fact, there are none. He also wanted to explain why so many health claims that look important when they are first made are not substantiated in later studies.

    The confusion arises because each result is tested separately to see how likely, in statistical terms, it was to have happened by chance. If that likelihood is below a certain threshold, typically 5%, then the convention is that an effect is "real". And that is fine if only one hypothesis is being tested. But if, say, 20 are being tested at the same time, then on average one of them will be accepted as provisionally true, even though it is not.

    In his own study, Dr Austin tested 24 hypotheses, two for each astrological sign. He was looking for instances in which a certain sign "caused" an increased risk of a particular ailment. The hypotheses about Leos' intestines and Sagittarians' arms were less than 5% likely to have come about by chance, satisfying the usual standards of proof of a relationship. However, when he modified his statistical methods to take into account the fact that he was testing 24 hypotheses, not one, the boundary of significance dropped dramatically. At that point, none of the astrological associations remained.

    Unfortunately, many researchers looking for risk factors for diseases are not aware that they need to modify their statistics when they test multiple hypotheses. The consequence of that mistake, as John Ioannidis of the University of Ioannina School of Medicine, in Greece, explained to the meeting, is that a lot of observational health studies - those that go trawling through databases, rather than relying on controlled experiments - cannot be reproduced by other researchers. Previous work by Dr Ioannidis, on six highly cited observational studies, showed that conclusions from five of them were later refuted. In the new work he presented to the meeting, he looked systematically at the causes of bias in such research and confirmed that the results of observational studies are likely to be completely correct only 20% of the time. If such a study tests many hypotheses, the likelihood its conclusions are correct may drop as low as one in 1,000 - and studies that appear to find larger effects are likely, in fact, simply to have more bias.

    So, the next time a newspaper headline declares that something is bad for you, read the small print. If the scientists used the wrong statistical method, you may do just as well believing your horoscope.

  4. From a medical research standpoint, there might be an incentive for them to use lax statistical methods. For instance, the continued flow of grant money to the researcher could be contingent upon significant findings. It all goes back to the three questions - who's making the decisions (researchers), do they have enough information to make the correct decision (I would like to think so), and do they have the incentive to make the right decision (aha!).
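The 24-hypothesis problem Dr Austin describes in the Economist piece above can be sketched in a few lines (this assumes the tests are independent, which is a simplification): with 24 true null hypotheses each tested at the 5% level, the chance of at least one spurious "finding" is about 71%, and a Bonferroni correction - dividing the significance threshold by the number of tests - brings it back to roughly 5%.

```python
import random

random.seed(42)
ALPHA = 0.05
N_TESTS = 24
N_SIMS = 10_000

# Probability of at least one "significant" result when all 24 null
# hypotheses are true and each test independently comes up below
# alpha with probability 0.05.
analytic = 1 - (1 - ALPHA) ** N_TESTS

false_positive_runs = 0
for _ in range(N_SIMS):
    # Under the null, p-values are uniform on [0, 1].
    p_values = [random.random() for _ in range(N_TESTS)]
    if any(p < ALPHA for p in p_values):
        false_positive_runs += 1

simulated = false_positive_runs / N_SIMS
print(f"Analytic P(>=1 false positive): {analytic:.3f}")  # ~0.708
print(f"Simulated:                      {simulated:.3f}")

# Bonferroni correction: divide alpha by the number of tests.
bonferroni_alpha = ALPHA / N_TESTS
corrected = 1 - (1 - bonferroni_alpha) ** N_TESTS
print(f"Corrected P(>=1 false positive): {corrected:.3f}")  # ~0.049
```

In other words, Austin's astrology "findings" are exactly what you'd expect from running 24 tests at the conventional threshold, and the fix he advocates is what makes them disappear.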