Yesterday I was doing an interview with a journalist from Flanders. At one point he asked me about something I guess he’d found on Wikipedia: a reference to an article titled “Tom Peters’ True Confessions,” concerning In Search of Excellence. The callout, repeated on the cover of the magazine, was “We faked the data.” It’s my own fault that such a line ended up in print, though I could live without the guy who concocted it.
A few years ago, as I was working with Fast Company co-founder Alan Webber on that article, we got to talking about In Search of Excellence and the selection of companies for the book. In Good to Great, Jim Collins says he started with a list of the top 1,000 companies (or some such) and, after applying some hurdles (more on “hurdles” below), came up with his sample. I told Alan we had done no such purportedly “scientific” thing. Instead, we’d gone around to McKinsey colleagues, academics, corporate types we knew, and so on, and asked about companies they thought were doing exceptional stuff. (For example, one of our neighboring firms was HP, then a fresh-caught $1 billion company; we put them on the list because they had a lot of out-of-the-ordinary practices, especially by 1979’s standards.) Thus we were “unscientific” by some measures (though scientific by the standards of “exploratory research,” which this was) in developing a list of candidate companies. However, after we had our roughly 100 “nominees,” we subjected them to the steep long-term financial hurdles described in the book, which forced us to prune the list to the final 43. And that’s the story of our methodology, take it or leave it. Since our goal, demanded by our client, Siemens, was to find “interesting,” “good” companies to analyze, we thought this was as good a way to go as any other, though there were, of course, a hundred ways we could have gone.
At some later point, Alan and I, in a rambling discussion, got onto the topic of “lies, damn lies, and statistics.” (Statistics, weird as it may sound, are a major hobby of mine.) That’s when I said something like, “Of course we know all this [Jim’s way or ours] is to some extent phony baloney. That is, if you try enough variations of plausible, tough long-term financial hurdles (e.g., 10 years or 20 as the baseline), you can significantly influence the outcome.” And that, of course, is true, as any business analyst of any seniority knows: change an assumption by a dab here and a smidgen there, and a questionable project looks like the pot of gold at the end of the rainbow. Defense acquisition projects are one glaring example. I suspect it was this latter discussion that influenced the headline writer.
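For the statistically curious, here’s a toy sketch of that point. Everything in it is invented: 100 hypothetical companies, random annual returns, an arbitrary 12% hurdle. It has nothing to do with our actual data or screens. The only thing it shows is that applying the identical hurdle over a 10-year versus a 20-year baseline produces noticeably different lists of “winners.”

```python
import random

# Toy illustration only (NOT the book's actual screen): simulate
# annual returns for 100 made-up candidate companies, then apply the
# same hurdle -- "average annual return of at least 12%" -- over two
# different baselines: the last 10 years vs. the full 20 years.
random.seed(42)

def simulate_company(years=20):
    # Each hypothetical company gets a persistent "quality" level,
    # plus independent year-to-year noise.
    quality = random.gauss(0.10, 0.04)
    return [quality + random.gauss(0, 0.15) for _ in range(years)]

companies = {f"Co-{i:03d}": simulate_company() for i in range(100)}

def passes_hurdle(returns, window, threshold=0.12):
    # Average return over the most recent `window` years.
    recent = returns[-window:]
    return sum(recent) / window >= threshold

pass_10yr = {name for name, r in companies.items() if passes_hurdle(r, 10)}
pass_20yr = {name for name, r in companies.items() if passes_hurdle(r, 20)}

print(f"pass the 10-year hurdle: {len(pass_10yr)}")
print(f"pass the 20-year hurdle: {len(pass_20yr)}")
# Symmetric difference: companies on one list but not the other.
print(f"on exactly one of the two lists: {len(pass_10yr ^ pass_20yr)}")
```

Same companies, same threshold, and the mere choice of baseline shuffles who makes the cut. That’s the “phony baloney” I was talking about.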
So it goes, and hence it’s my own damn fault. I unhesitatingly acknowledge that in the social sciences it’s not too hard to reach varied results depending on the measures you decide to use. But that’s a country mile from “We faked the data.” So you can take that explanation, or leave it, but there it is.