Last week, the Climatic Research Unit—one of the world’s leading institutions devoted to the study of climate change—suffered a leak. Someone anonymously released thousands of previously secret CRU emails and documents to the internet. Since then, critics of the CRU—including global warming skeptics—have found embarrassing materials among the leaked files. Some emails appear to advocate changing measured temperature data to better fit global warming arguments. Other emails discuss pressuring academic journals to reject papers skeptical of global warming. Still other emails recommend deleting models and data files rather than providing them in response to a Freedom of Information Act request. The Daily Telegraph has (inevitably) dubbed this scandal Climategate.

What does Climategate mean for global warming? Is the scandal a “smoking gun,” as some have suggested, showing that man-made global warming is not occurring, or perhaps only occurring slowly? Or is it a “tempest in a teapot,” as others claim, certainly embarrassing for the scientists involved but ultimately providing no evidence against man-made global warming? Frankly, I don’t understand the science deeply enough to have a useful opinion. But no matter what it means for the science and politics of climate change, Climategate offers lessons for those of us who use models to persuade.

As we describe in our book, business models are used for eight purposes. One of those purposes is analysis, using a business model to gain insight, insight about customers, regulations, productivity improvements, or whatever. Another purpose is persuasion, using a business model to change someone’s mind. Models are used to persuade customers to buy, to persuade stakeholders to invest, and to persuade employees to commit.

CRU created models, models of climate of course, not models of business situations, but models nonetheless. They used these models for both analysis and persuasion. They used their climate models to analyze temperature data, as part of the scientific process, ultimately publishing academic papers based on their analysis. And they used their climate models for persuasion, to convince other scientists, politicians, the media, and the general public that the world is warming, and that fossil fuel emissions are causing that warming.

The CRU kept their models secret. They did not provide them to global warming critics in the scientific community, and in fact they did not even provide them to supporters in that community. No one outside the CRU had access to the models.

Secret models are fine for analysis. But secret models fail to persuade. Skeptics ask why they should believe the results of your model, if you won’t share it with them. How do I know you haven’t cheated in some way, created a model that simply supports what you want rather than one that is independent of your desires?

So the first lesson of Climategate is share your models. Make the models available to anyone who wants them. And in particular share your models with skeptics. The skeptics are the people who most need to be convinced, and they are the people who are the least likely to be convinced by secret models.

Among the leaked CRU documents were model files, software code that implements the CRU climate models. As outsiders slowly pore over the code, they have found many issues of professionalism: poor programming practices, apparent bugs, and comments that indicate that a later model developer does not understand the code written by an earlier one. These professionalism issues with the CRU models have provided skeptics with more reasons to be skeptical. If the CRU follows poor programming practices, how can we know that the results are accurate? If the CRU personnel who are responsible for the model code do not themselves understand it, how can we outsiders trust it?

The second lesson of Climategate is ensure that your models are professional. Models meant to persuade should be easy to understand, exhibit best practices, and of course be free of bugs. Reasonable people may disagree with the assumptions contained in the models, but those same reasonable people should not have cause to question the professionalism of the models themselves. Only professional models persuade.
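What might professionalism look like at the level of model code? Here is a minimal sketch, in Python, of one such practice: a unit test that pins down what a model function is supposed to do, so that later developers (and outside skeptics) can verify the intent. The function and its behavior are hypothetical, invented for illustration; they are not from the CRU code.

```python
# A hypothetical sketch of one professional practice: a unit test that
# documents and verifies the intended behavior of a model function.
# The function below is invented for illustration.

import unittest

def decadal_average(temps):
    """Mean of a sequence of temperatures; raises on empty input
    rather than silently returning a misleading value."""
    if not temps:
        raise ValueError("no temperatures supplied")
    return sum(temps) / len(temps)

class DecadalAverageTest(unittest.TestCase):
    def test_simple_mean(self):
        self.assertAlmostEqual(decadal_average([10.0, 11.0, 12.0]), 11.0)

    def test_empty_input_is_an_error(self):
        with self.assertRaises(ValueError):
            decadal_average([])

if __name__ == "__main__":
    unittest.main()
```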

The CRU relies on temperature data from many different sources all over the world, collected over many years. Some of the data was collected more than 100 years ago. Altogether the CRU assembled millions of individual temperature measurements. But this raw data was collected by hundreds of different organizations, with varying equipment and varying disciplines. Before the data could be analyzed, it had to be statistically adjusted and normalized, so it could be properly compared.
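To make the normalization problem concrete, here is a minimal sketch of one common approach: expressing each station’s readings as anomalies, departures from that station’s own baseline average, so that stations with different equipment and different climates can be compared on a common scale. The stations, years, and baseline window below are invented for illustration; this is not the CRU’s actual procedure.

```python
# A hypothetical sketch of anomaly normalization. Each station's readings are
# re-expressed as departures from that station's own baseline-period average.
# Stations, years, and the baseline window are invented for illustration.

BASELINE_YEARS = range(1961, 1991)  # an assumed 30-year reference period

def baseline_mean(readings):
    """Average temperature over the baseline period for one station."""
    baseline = [temp for year, temp in readings if year in BASELINE_YEARS]
    return sum(baseline) / len(baseline)

def anomalies(readings):
    """Each reading expressed as a departure from the station's own baseline."""
    base = baseline_mean(readings)
    return [(year, round(temp - base, 2)) for year, temp in readings]

# Two invented stations with very different absolute temperatures...
station_a = [(1960, 10.1), (1970, 10.3), (1980, 10.2), (1990, 10.6), (2000, 10.9)]
station_b = [(1960, 24.0), (1970, 24.1), (1980, 24.3), (1990, 24.5), (2000, 24.8)]

# ...become comparable once each is normalized against its own baseline.
print(anomalies(station_a))
print(anomalies(station_b))
```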

Since the leak, the adjustments themselves have become controversial. No one is denying the need for statistical adjustments, but some climatologists have accused the CRU of always adjusting in ways that increase the amount of apparent global warming. They claim that the raw data shows much less warming than the adjusted data, and that the warming is an artifact of the statistical adjustment process, not of the climate.

There is no easy way to compare the adjustments the CRU made to other potential adjustments that could have normalized the data. The CRU made assumptions about the adjustments, and perhaps those assumptions were reasonable, or perhaps they introduced bias. There is no easy way to know.

So the third lesson of Climategate is make it easy for others to change the assumptions. When I create models to persuade, I provide a user interface of assumptions, giving skeptics the ability to change the assumptions I made and understand the effect of each assumption on the outcome. Allowing skeptics to provide their own assumptions is critical to convincing them.
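A minimal sketch of what that looks like in code: every assumption is collected into one visible, overridable structure, rather than buried in the model logic. The restaurant model and all of its numbers below are hypothetical, chosen only to show the pattern.

```python
# A minimal sketch of "assumptions as a user interface": every assumption in
# the model is gathered into one visible, overridable structure. The model
# and its numbers are hypothetical.

from dataclasses import dataclass

@dataclass
class Assumptions:
    average_check: float = 42.0       # dollars per diner
    diners_per_night: int = 180
    nights_per_year: int = 360
    food_cost_fraction: float = 0.32  # fraction of revenue spent on food

def annual_profit_contribution(a: Assumptions) -> float:
    """A deliberately simple restaurant model built only from the assumptions."""
    revenue = a.average_check * a.diners_per_night * a.nights_per_year
    return revenue * (1 - a.food_cost_fraction)

# My assumptions produce one answer...
print(annual_profit_contribution(Assumptions()))

# ...and a skeptic can substitute her own assumptions and see exactly
# what changes in the outcome.
print(annual_profit_contribution(Assumptions(diners_per_night=120,
                                             food_cost_fraction=0.40)))
```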

What if the CRU had learned and applied these three lessons? First, they would have given their models to everyone who wanted them. The models could not have been leaked if they were not secret in the first place. Second, the CRU models would have been professional, free of bugs and easy to understand. Doubts about the results of the models would have been averted. Third, the models would have included assumption user interfaces, allowing critics to understand how different assumptions would have changed the outcome.

I suspect that global warming would have been contentious no matter what actions the CRU had taken. The stakes are large—trillions of dollars and thousands of lives—too large to avoid controversy. But certainly if the Climatic Research Unit had learned these three lessons of using models for persuasion, they could have avoided much of the Climategate mess they are suffering today.

I have been noodling lately about the problem of strategy execution. Some organizations know what they need to do, and they know why. But they cannot seem to do it. They cannot make the process changes required to execute the new strategy. (Have you ever worked for such an organization?)

The recent book Strategy and the Fat Smoker is about strategy execution. Its clever title compares the problem of strategy execution to the difficulties faced by fat smokers.  Today, every fat smoker knows he should change. He knows that he should eat more salads and less ice cream.  He knows he should exercise more and watch less TV. And he knows he should quit tobacco. He knows what he should do, and he knows why. He just cannot manage to change his behavior.

Of course there are significant differences between failures of personal strategy execution—like the fat smoker’s—and failures of organizational strategy execution. All organizations are political, and what is best for the organization as a whole is always worse for some business unit, or some function, or some group. Somebody always loses. And the (prospective) losers will lobby hard to prevent the changes required.

The 2004 book Predictable Surprises looks at failed execution of disaster prevention: organizations that knew what to do to prevent a predictable disaster, but that failed to take the appropriate actions. Bazerman and Watkins—the authors of Predictable Surprises—examine some disasters in detail, and probe into why the disasters were not averted. One disaster they examine is the 9/11 terrorist attack. For 20 years, several government agencies were aware of the increasing violence of jihadists, and of their increasing willingness to strike at distant infidels in the US as well as nearby infidels in Israel, India, Egypt, etc. For 20 years, the FAA (and other government agencies) was aware of weaknesses in air travel security, including the very tactics used by the 9/11 jihadists. The authors claim that 9/11 was not in fact a “failure of imagination” (as famously described by the 9/11 Commission) but rather a failure of execution: the Gore Commission knew what to do in 1996, but could not manage to execute it, particularly in the face of opposition from the airline industry and from Congress.

Can business modeling (and simulation) help? I suspect so, but I am not sure. That is my noodling topic. The business modeling techniques we describe in our book are certainly useful for figuring out what to do, and how to do it. But the four modeling disciplines we describe are not particularly useful in actually changing behavior once the what and the how are known and accepted. We need something additional, a simulation discipline that explicitly addresses the inherent politics, that allows different groups within the organization to make their cases, and that induces action.

We are in the midst of the world’s first recession caused by modeling, a model-driven recession.

We are certainly in the middle of a recession, a declining economy. Some think this downturn is the beginning of a depression, comparable to the depressions of the 1930s or the 1870s. Others think this will be more like the typical post-WWII recessions, lasting 1 or 2 years.

Every recession is different, with different causes, different dynamics, and different outcomes. For example, the 1990-1991 recession was triggered by the oil price shocks of the first Gulf War, and by the failure of 747 savings and loan associations.  (I oversimplify. At the risk of stating the obvious, the American economy is enormously complex, the most complex human-constructed artifact in world history. There are no single causes for anything.) How did today’s recession happen?  This recession started with the big banks and other financial institutions in New York.

In the course of their everyday business, big banks buy securities and take other financial positions. All of these positions carry risks, risks that the positions will decline in value, become worthless, or even lead to large liabilities. Banks attempt to manage their risks, both by limiting the positions they take and by hedging their positions with other positions, positions that will gain value when the former positions lose it.

Starting in the mid-1990s, many banks began to use a new form of model to manage their risks: value at risk. VaR models attempted to measure the riskiness of a position in terms of a 1% worst case. A VaR value of $1 million means that there is a 1% chance of losing more than $1 million on a particular position over a given time horizon, often a single day. VaR values are computed both for individual positions and for portfolios of thousands of positions taken together. Banks used VaR models to determine how much risk they were taking, and how they should hedge that risk. (The business journalist Joe Nocera explains the history behind value at risk, how the banks adopted these models, and what happened next.)
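To make the definition concrete, here is a minimal sketch of the parametric flavor of VaR, assuming (as the flawed models discussed below did) that daily returns are normally distributed. The position size and volatility are hypothetical; real VaR systems are far more elaborate.

```python
# A minimal sketch of 1-day 99% value at risk, computed parametrically under
# a normal-distribution assumption. Position size and volatility are
# hypothetical; real VaR systems are far more elaborate.

from statistics import NormalDist

def parametric_var(position_value, daily_volatility, confidence=0.99):
    """Loss threshold that should be exceeded only (1 - confidence) of the
    time, IF daily returns really were normally distributed."""
    z = NormalDist().inv_cdf(confidence)  # ~2.33 for 99%
    return position_value * daily_volatility * z

# A hypothetical $100 million position with 1% daily volatility:
var_99 = parametric_var(100_000_000, 0.01)
print(f"1-day 99% VaR: ${var_99:,.0f}")  # a 1% modeled chance of losing more
```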

Unfortunately VaR models are fundamentally flawed. There are several problems with value at risk, but the biggest problem is the implicit statistical assumption that financial markets behave as normal distributions. The normal distribution is of course just the good old bell curve from elementary statistics, an accurate model for some phenomena: for example, men’s heights. But normal distributions are known to be a poor model for the price fluctuations of financial instruments. Extreme movements are much more likely on Wall Street than in a normal distribution. (Power law distributions are a better model of financial markets.)
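A back-of-the-envelope comparison shows how much the distributional assumption matters. The sketch below compares the modeled probability of a five-standard-deviation one-day decline under a normal distribution and under a fat-tailed Student-t distribution; the Student-t is used here merely as a convenient stand-in for heavy-tailed market behavior, and the parameters are illustrative, not calibrated to any actual market.

```python
# A back-of-the-envelope illustration of why the normality assumption
# matters. We compare the modeled probability of a 5-standard-deviation
# one-day decline under a normal distribution and under a fat-tailed
# Student-t (a stand-in for heavy-tailed markets). Parameters are
# illustrative, not calibrated to any market.

import math
from scipy.stats import norm, t

DF = 3  # degrees of freedom: smaller means heavier tails
# Rescale the Student-t so it has unit variance, for a fair comparison:
unit_t = t(DF, scale=1 / math.sqrt(DF / (DF - 2)))

move = -5  # a five-standard-deviation one-day decline

p_normal = norm.cdf(move)   # ~2.9e-07: essentially "never" under normality
p_fat = unit_t.cdf(move)    # ~1.7e-03: thousands of times more likely

print(f"Normal model:     P(5-sigma drop) = {p_normal:.1e}")
print(f"Fat-tailed model: P(5-sigma drop) = {p_fat:.1e}")
```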

So VaR works fine, except when the markets change quickly. David Einhorn compares VaR to “an airbag that works all the time, except when you have a car accident”.

The consequence of banks using VaR models to manage risk is that they systematically underestimated the risk of big losses. They saw their portfolios as substantially less risky than they turned out to be. Based on the model-driven overoptimistic assessment of the portfolios, the banks acquired more risky securities than they otherwise would have. Then when things turned, everything declined quickly. Once a bank started to experience a larger loss than it thought likely under its VaR models, it sold the assets, leading to lower prices for those assets, and bigger losses for everyone else who held those assets, or similar ones. The losses fed on themselves.

To some extent, this is a typical pattern for a speculative bubble and its subsequent burst. The railroad bubble of the 1870s and the following Panic of 1873 followed essentially the same steps. People were overoptimistic about the value of the railroads that many were building. Then when things turned, everyone tried to sell at the same time, prices declined, and panic ensued. But the bad models that drove the Panic of 1873 were merely in people’s heads; they were poor mental models, not poor computer models. Today’s recession is the first caused by computer models.

We business modelers can take some small comfort here. It was not business models that drove this recession. VaR is a modeling discipline of capital market valuation: VaR is about the chance that a given security will decline in value. It is not about models of corporate goals or processes or business rules, not the kind of business models we describe in the book. Their models were at fault, not ours.

And if their models got us into this fix, maybe our models have a role in getting us out. Phil Gilbert argues that we need to improve the productivity of white collar work to turn the corner on this recession. I think he is right. Historically, the greatest increases in productivity have come during recessions. When times are good, everyone focuses on meeting the top-line numbers, on selling more and then somehow delivering on what has been sold. Costs and productivity are less important. When times are bad, everyone focuses on the bottom-line numbers, on somehow generating more earnings from the meager sales they are able to make. Costs and productivity rule. And since white collar work is vastly larger than blue collar work these days, increasing the productivity of white collar work is essential to improving overall productivity.

We can use business modeling to increase the productivity of white collar work, so fewer people create more value. For example, when business process models are executed in a BPMS, the work can be done faster, better, and more consistently. Our book is about the business value of business modeling, and much of that business value shows up as white collar productivity.

In addition to improving white collar productivity, Phil also talks about increasing the transparency of white collar work. He says: “There needs to be a revolution in implementation of processes that bring greater visibility and less risk to all aspects of our businesses. It is no longer acceptable that senior management remain ignorant of the goings-on at even the deepest depths of the organization.” This linking of visibility and risk is new to me, but perhaps it should not be. We all accept the importance of visibility in the public sector, as the premier means of preventing corruption. In fact, the most influential anti-corruption NGO is called Transparency International—their very name reflects the link between visibility and the biggest public sector risk, corruption.

The growth of systemic risk on Wall Street over the last 20 years is not the same as public sector corruption, of course. In fact, it is closer to incompetence than corruption. As Michael Lewis says, “[Meredith Whitney] just expressed most clearly and loudly a view that was, in retrospect, far more seditious to the financial order than, say, Eliot Spitzer’s campaign against Wall Street corruption. If mere scandal could have destroyed the big Wall Street investment banks, they’d have vanished long ago. This woman wasn’t saying that Wall Street bankers were corrupt. She was saying they were stupid. These people whose job it was to allocate capital apparently didn’t even know how to manage their own.”

But there is a similarity between this model-driven stupidity about risk and simple corruption. Both rely on secrecy. Corruption relies on secret payments from (for example) a favored firm to a government official. The big banks relied on secrecy about the risks they were taking to achieve the profits they reported. If the risks had been visible to everyone—shareholders, other lenders, analysts, regulators—the banks might have avoided this mess.

Perhaps the mess was unavoidable, even with better visibility.  VaR might have driven the banks to make the same blunders, and just made those blunders more rapidly obvious to everyone outside. The increased visibility from business modeling can still help us grow the rest of the economy again. Most of the economy is outside Wall Street, outside of the financial service industry entirely. Those businesses need better visibility of their business processes, the business rules they are using, and the goals and objectives they are trying to achieve. Our business models can deliver that kind of visibility. With that visibility we can turn the corner on this model-driven recession.

Thirty-five years ago, when I was twelve, I received a postcard from New York. The postcard read:

Dear David,
The whole country is humming with the news that you are a
fan of mine. I hear about it everywhere, and I think that’s
great.

I wish you all possible success as a science fiction writer
and as an astronomer.

Isaac Asimov

I was stunned.  The Great Man had written me! Wishing me success!

Of course my mom had arranged it. She had written Asimov a week earlier, asking him to write me. And his practice was to reply to fans. Over the course of his 72 years, he wrote an estimated 90,000 postcards and letters, many to his fans. Including one to me.

Peter Graham once noted that the “golden age of science fiction was twelve”. For me, twelve was the golden age of Isaac Asimov.  I tried to read everything he had written, all the science fiction of course, but also the popular science and the history. I even tried “Isaac Asimov’s Treasury of Humor” and “Asimov’s Guide to the Bible”. (I did not like either.)

When I was twelve, I did in fact want to be an astronomer and a science fiction writer, as my Asimov postcard attests. But later, I lost interest in the inner workings of the stars, and instead became fascinated with the inner workings of algorithms and software. And starting in my early twenties, I became increasingly interested in the dynamics of business. Others read the Wall Street Journal and the Economist out of professional obligation. I read them for fun, trading my youthful passion for tales of space travel and time travel for the sagas of the long decline of General Motors and the rise and fall of AIG.

But through these 35 years, one aspect of my career has not changed: I have wanted to write a book. Thirty-five years ago, I wanted to write science fiction novels. Since then, my ambition shifted to nonfiction. I have started several books. I worked for a bit in the early 1990s on a book about business process modeling and business process simulation, with my then-colleagues Ralph Welborn and Brian Otis. We never progressed further than a detailed outline, although Ralph later published two books on related topics. In the late 1990s I became convinced that technical recruiting was practiced poorly everywhere, and that I knew how to do it better. I started a book on recruiting, but never finished it. (It is still practiced poorly, by the way.) Earlier this decade I helped Deutsche Bank launch a startup—Cokinetic Systems—and wrote a book on their innovative RIA technology, I3ML. But I never found a publisher for the book. (Cokinetic has since pioneered a rich media platform for air passengers, based on I3ML.)

The fourth attempt was more successful. Our book was published in mid-December. It is not science fiction, of course. It is nonfiction about a rather arcane topic, creating models of business situations. But it does include a fictional story within it. We needed examples to illustrate the business modeling techniques and disciplines. Some of the needed examples were supplied by case studies, real business modeling situations that either Ron or I had experienced in our consulting work. But we needed more examples, and different ones, than the case studies provided. So we created a single fictional business—Mykonos Dining Corporation—which owns and manages 200 high-end restaurants. Neither Ron nor I have any experience running restaurants, but we knew people who did, and more importantly, the restaurant business is accessible to a wide audience. Everyone eats in restaurants, and many people have fantasized about opening their own.

Writing a book was much harder than I anticipated. The biggest problem was keeping everything consistent across the 12 chapters: the wording, the structure of the chapters, the models we explain, and the storyline about Mykonos Dining Corp. We made hundreds of changes to keep things consistent. Fortunately for me, Ron has an eye for inconsistency, and spotted the vast majority of the problems.

Isaac Asimov wrote many books during his life. Perhaps 468. Or perhaps 515. There is some controversy about the count, about which books should be included. But in any case, hundreds. He was astonishingly prolific. I am more astonished now, after having written one, that Asimov wrote so many.

One key to Asimov’s productivity was the quality of his first drafts. Another key was the lack of revision.  Apparently Asimov rarely revised anything he wrote. He claimed to be incapable of seeing the flaws in his own sentences, so his first drafts were submitted to his editor, and often published as they were. I do not share that particular weakness. I see the flaws in my own sentences quite well. So I rewrite, and rewrite again.

I know my continual revisions drove Ron crazy. Several times he pointed out that I was revising a paragraph I had already revised. As late as September, we were fixing problems introduced in the production process, and I found some more wording I did not like. Ron tried to talk me out of changing it.

Curiously, Ron was also an Isaac Asimov fan as a boy. While I was reading the Foundation Series in downstate Illinois, he was reading it 6000 miles away in Israel. Ron credits Asimov with his early interest in computers and modeling, and in retrospect, so do I.

I dedicated the book to my daughters, Miranda, Isabel, and Alexandra. Ron’s dedication was to his mother and to the memory of his father. But perhaps we should have both dedicated it to Isaac Asimov.