Interview with Roger L. Martin: "When More Is Not Better"

Author
Sohrab Salimi

Reading time
35 Minutes

In February 2021, we had the pleasure of welcoming Roger L. Martin to agile100. He is not only a professor at the renowned Rotman School of Management at the University of Toronto but also holds one of the top spots on the Thinkers50 list. In addition, Roger L. Martin is world-famous for his best-selling books on management and leadership.
At agile100, he spoke with our host and Certified Agile Leadership trainer Sohrab Salimi about his latest book, "When More Is Not Better", as well as leadership, management, and agility.

"When more is not Better" Roger L. Martin in a discussion with Sohrab Salimi

Roger L. Martin (agile100 February 2021)

Sohrab:
Before I introduce Roger in more depth, just a few things up front. He has worked in many, many fields, and he has accomplished a lot. And did you know that his books are connected? If you go through them one by one, you'll see the connections even though they cover different topics, which I think is very interesting, because a lot of authors and thinkers stay in just one area.
Roger seems to go in depth in many areas, which I admire. In 2017, Roger was named the world's number one management thinker by Thinkers50. I think in 2019 you dropped to number two, but that's still very, very good. And you're a trusted advisor to a lot of CEOs of companies like Procter & Gamble, and I think you wrote a book with A. G. Lafley at some point, but you have also advised Lego and Ford, among many other companies. You served as a professor and dean at the Rotman School of Management at the University of Toronto. And you were also named Dean of the Year, I think it was in 2013, if I have my facts straight.

Roger:
That's correct.

Sohrab:
And your latest book is "When More Is Not Better", which I really enjoyed reading and which we're going to cover in this session... not cover, but talk about.
But you've also written a lot of other books, I think 11 in total, and published a lot of articles in the Harvard Business Review. Something I also follow, and urge everyone to follow, is the work you do on Medium, the free posts you share with everyone with your thoughts on strategy, mostly around where to play and how to win, which I find very, very interesting. You went to Harvard, and you graduated with a master's in business administration. Currently you live in Florida. I think I got everything right, right?

Roger:
Yes, it all sounds right to me.

Sohrab:
All right, so Roger, let's get started. As I read your book, especially compared with the other books you've written in the past, I wondered from time to time: what made you write "When More Is Not Better: Overcoming America's Obsession with Economic Efficiency"?

Roger:
Well, I think it's interesting, a lot of people ask me, "Hey, you've written about a range of things. Why the range, and then why this book?" And in my opinion, they all have one theme, which is a model that's being used in the world that's not producing the results that the people using it want.
So my books try to diagnose that model, help people understand why it's not working, and then offer them a different model. If I go back to the first book that people remember me for, "The Opposable Mind", the model we're taught in leadership says: if there's a really difficult decision where neither option is particularly great, then as a leader you just have to buckle down, be tough, and make that decision. That's the model. That's the widely held view. It's like Harry S. Truman's "The Buck Stops Here" sign that sat on his desk when he was president. And I wrote a book called "The Opposable Mind: Winning Through Integrative Thinking" that says, "No, this model is actually not helpful."

In fact, when you have a really difficult choice, a leader should say, "This is the time I step back and refuse to choose, because neither option is good enough. Otherwise, I wouldn't be struggling like this." If one were a great choice and the other a bad choice, I'd say, "Hey, man, this is easy. I'll take the awesome one." But there's no choice here that's particularly good.
You have to step back and create a new, better answer. And if you look at all my books through that lens, "When More Is Not Better" explores the model of the economy as a machine, a machine that needs to be perfected, and perfected through the pursuit of efficiency.
And what I've said is: oops, unfortunately, this model doesn't bring about what you want. In fact, it brings the opposite of what you want. Here's an alternative model: the economy is a complex adaptive system, and you should constantly tweak it, tweak it, tweak it, instead of striving for perfection, and you'll get better results. So that's how the books are connected. In this one, I just said, "Hey, there's a model here that should produce X and it doesn't." But we keep saying, "Well, we're not doing it hard enough. We're not putting enough resources behind the model. Just a little bit more, and that will get us over the hump." No, it won't. It's just going to lead to more extreme negative consequences. So that was the motivation.

About the book "When More Is Not Better"

When More Is Not Better (book by Roger L. Martin)

Sohrab:
That was the motivation. So there's this overarching theme of exploring models that are deeply rooted in our society and in our brains, that lead to certain ways of thinking but don't deliver what we want or don't lead to the outcomes that we need. And in this case, it was mainly about society, not specifically about an organization, but about a society that needs different outcomes than we are currently experiencing. And to bring that up, you focus on the United States. But when I read it from a German perspective, I could see a lot of the same trends happening here and also in Europe. And I think it's very well explained.

Now you've already started to address some of the key messages. First, that there is a mechanistic view, the machine view, of the economy but also of organizations. And what are some other key messages that you want people to take away quickly, so we can then dive deeper into the book as a whole?

Roger:
Well, one message I have for people is to be very careful about this thing called surrogation. What we often do is have a model where we say, "Well, I think this is how it works." And then we make a measurement tool out of that model, which makes sense, because we want to measure whether the model is working for us. But then the measurement becomes the model. Wells Fargo, the big scandal, is a great example of that. The model is, "Wells Fargo will be better off if we have deep customer relationships."

That's a good model, I would say. And then you say, "Well, how would we know if they're deep?" The answer is, "Ah, it seems that deeper relationships are the ones where the customer has more financial services products with us. They have their mortgage. They have their savings account. They have their credit card, and so on. The ones that have more are the ones we have longer, more productive relationships with." Again, all reasonable. And then you have this proxy that says, "Well, as one way to measure the depth of a relationship, we're going to use how many accounts they have."

Well, if you remember, it's not the number of accounts that we're after. It's deep relationships, and that's just one way to measure them. That's fine. But it turns out that people tend to just forget all the rest of that chain, the chain of imperfection, as I call it, and say deep relationships equals number of accounts. Then you put in an incentive system to get more accounts per customer, and then your employees start opening accounts for people without asking them, because having more accounts is treated as inherently a good thing. That's called surrogation: when you forget how your model works, and you forget that all models are wrong. The moment you create a model, you abstract from reality and say, "Here's a way to think about it." And models are very useful. We model because we want to understand the world more easily and more clearly. But all models are wrong.
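
To make the surrogation trap concrete, here is a minimal, purely illustrative Python sketch. All names and numbers in it (chase_proxy, the account counts, the depth penalty) are hypothetical assumptions for illustration, not figures from the interview or the book; it simply assumes that real relationship depth grows only with accounts a customer actually wanted, while the rewarded proxy counts every account opened.

```python
import random

random.seed(0)

def simulate(chase_proxy: bool, customers: int = 1000) -> tuple[int, float]:
    """Toy model of surrogation: the proxy (accounts opened) versus the
    underlying goal (relationship depth, which here grows only with
    accounts the customer actually wanted)."""
    accounts_opened = 0       # the proxy the incentive system rewards
    relationship_depth = 0.0  # stand-in for the real goal
    for _ in range(customers):
        wanted = random.randint(1, 3)       # accounts the customer asked for
        unwanted = 4 if chase_proxy else 0  # accounts pushed to hit the metric
        accounts_opened += wanted + unwanted
        relationship_depth += wanted - 2 * unwanted  # abused trust destroys depth
    return accounts_opened, relationship_depth

for label, chase in [("honest selling", False), ("chasing the proxy", True)]:
    opened, depth = simulate(chase)
    print(f"{label:18s}  accounts opened = {opened:5d}  relationship depth = {depth:8.1f}")
```

Run it and the proxy roughly triples while the thing it was meant to stand in for collapses, which is exactly the pattern of the measurement replacing the model.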

Sohrab:
Some of them are helpful.

Roger:
Yes. Some of them are helpful. Exactly. But if you just remember that they're a model, you're fine. And if you remember that a proxy is a proxy, everything's fine. But we forget. So the advice I give is: don't forget that the machine is just a model, and be really careful about the proxies you use to determine whether the machine is working the way you want it to. And the proxies we've chosen are things like: how low have we kept labor costs, how much slack, which we call waste, have we taken out of the system. Those are clues to our efficiency, but in the end, they're clues that make you less effective in a global sense over time.

Sohrab:
Less effective and less resilient. And we'll get into that. Now, to take the Wells Fargo example, a proxy basically became an end in itself rather than a means to an end. We also see this a lot with objectives and key results, where there's an objective to begin with, and if you don't balance the key results, each of those key results becomes an end in itself and people forget about the overall objective. So even if you're using more modern and probably more agile methods to set goals, you can still run into that problem and that challenge if you're not aware of it.

Roger:
I think that's a really good insight, Sohrab. I think that happens all the time. And that's why I tell people to hold their models more delicately and loosely than they normally do, because no model, no measure, is going to be perfect by itself. It's the way you apply it. And I think you'd probably be the first to say that if you misapply Agile in a completely doctrinaire, inflexible way, you can destroy a company just as quickly. And as an Agile enthusiast, you would say, "Oh, that's a terrible thing. This tool should work in the following way. And you should think in the following way." But the person on the other end says, "I'm going to do it in the most doctrinaire way." And that's why great chefs don't follow recipes.

Sohrab:
Yeah, they just cook.

Roger:
Great cooks have a recipe in mind. But given the ambient temperature, the fact that an ingredient happens to be a little different this time, the time we're going to eat, it all gets modified. And every model has to be handled with that careful consideration in mind. A model is just a model. It's not right. It's just something that will hopefully help me. So I don't let the model rule me; I have to rule the model.

Sohrab:
That's exactly how it is. I have to use the model to think about certain things in a systematic way and then, along the way, see whether it makes sense or not. I love the chef analogy you provided, because a chef doesn't just measure how much sugar and salt and whatever; they constantly taste what the food is really like. And then they review it and adjust it along the way. You mentioned another thing when you gave us some of the key messages of the book, and that was the concept of a complex adaptive system.

Roger:
Yes.

Complex Adaptive Systems

Stacey Matrix on the organizational theory of complex systems

Sohrab:
Those of you who are more at home in the agile space, and have read into it, are probably familiar with complex adaptive systems. But I wanted to hear from you, Roger: how do you define a complex adaptive system, and where do you see them? You mention them in your book, and I'd love for you to share that with everyone here.

Roger:
Sure, absolutely. So when you have a system, you can't just tear it to pieces and put it back together and assume it's going to be the same. The pieces only make sense as part of the whole. It works as a whole. It's complex, and by that we mean that the processes of cause and effect are not perfectly understandable and linear. It's not like a gas pedal, where you press down on the pedal and the car inevitably speeds up. In many complex adaptive systems, you may press a particular pedal and three years later the system accelerates in a way you didn't expect. And then, maybe most importantly, you could say it's a complex system, but that wouldn't really describe it fully. It's a complex adaptive system, which means that the actors in the system are always adapting to what the system is doing at any given time and to the incentives that creates.
And again, it doesn't even have to be human. A rainforest is a complex adaptive system, and trees grow in very strange ways to get out of the shadow of the tree that stands between them and the sun. That is an adaptation to what the trees around them have done. And so I would argue that the economy is a complex adaptive system.

I mean, I'm a trained economist, so I'm probably nastier to economists than to anybody else; I'm nasty to my own field. But economists are so often wrong in their forecasts. And that's because the economy is a complex adaptive system. It's really hard to figure out what's causing what. The monthly Blue Chip economic forecasts, literally all 50 of them, in December 2008 were still forecasting economic growth for the economy in 2008. It's amazing. You have to go back and read them and see what they were thinking, even though they already had eight or nine months of data, and pretty good data in the last couple of months. And instead of economic growth, it was one of the worst contractions in a hundred years.

So it's complex. The players in it keep adjusting. Whenever the legislature passes a law, there's an immediate adjustment to it.

In 1993, Clinton signed legislation to put the hammer down on executive compensation: only $1 million in CEO compensation would be deductible for corporate tax purposes. That was supposed to stop the incredible increase in CEO pay. What was the adjustment? Pay them a million dollars in cash and give them a whole bunch of stock options that don't count in that formula. And lo and behold, in the very short span of seven years, CEO pay increased tenfold.
So instead of suppressing it, pushing it down, slowing it down, it created an acceleration, because there was an adjustment. It would be nice if we knew more about how the economy works, about what does what in the economy. It would be nice if people would just behave the way we want them to behave when we legislate. But that's not what we're up against. It just isn't. And yet it is still taught that way. Sohrab, I can guarantee you that it's taught that way in economics courses around the world, that you can draw these curves and predict some kind of output and all that stuff. It's all made up, it's all made up.

Sohrab:
I think there's a lot of... when I say wishful thinking, maybe that sounds too negative. But I think to some extent it's human nature, because unpredictability is very hard for us to digest.

Roger:
Yes.

Sohrab:
And then we try everything to somehow predict the future and give ourselves certainty. But you mentioned the economy, which is a huge thing. If we go back to what a lot of people in the audience are dealing with, like individual projects or product initiatives or maybe even organizational change, none of that is predictable. And we see it day in and day out: we have a set price, a set budget, a set time, a set scope, and it almost never comes to pass, and then we blame people. Whereas I think if it happens more than once, it's usually a system failure, or in your words, a model failure, right?

Roger:
Yes. Can I just say again that I think you make such good points. I just think that's right. And then add to that: how do they adjust to these fixed budgets, these fixed schedules? Because if you impose those, even very good people are going to do things like cut corners, because they have a boss who says, "Is this piece of code written yet? Is it written? We had a deadline," and the person says, "Okay. Okay. I'll give it to you." But did they test it and exercise it as much as they should have? No, because they adapted. Their behavior adapted to the structure, which was a boss yelling at them that the code wasn't ready on the 17th of the month. So I just want to emphasize, you're right, you're spot on, Sohrab. And there are great sins committed in the hope that we can create certainty where there is no certainty to be had.

Sohrab:
Absolutely. Now, there are many great quotes in this book. But one in particular resonated with me, and that is that there are no side effects. There are only effects. Some of them turn out the way we want them to, and then we give ourselves credit. And some of them turn out differently than we expected, and then we just say, "Oh, that's a side effect."

Roger:
Yeah. Who wouldn't have known that. Yeah.

Compensation systems and their side effects

Sohrab:
Can you expand on that a bit and maybe with some examples of companies, etc. or systems that you cite in the book?

Roger:
Sure. First of all, again, it's a great insight. If your readers or your audience haven't read him, they should: John Sterman, the great MIT System Dynamics professor. That's his point, that there are no side effects. There are just effects that you expected and effects that you didn't expect. And, for example, people in businesses are often baffled by the things that, say, salespeople will do to sell more goods that they shouldn't have sold. So they say, "Well, the salespeople sold stuff that wasn't in stock instead of selling the stuff that was in stock, and now we have a big problem with unhappy customers."

And then you look at the compensation system for the salespeople, and that's commission-based.

And they get the same commission for the things that are not in stock as for the things that are in stock. And the stuff that's not in stock is not in stock because the customers like it more than the stuff that is in stock. And then you wonder why there's this side effect, this unexpected side effect of selling stuff that you don't have in stock. That's not a side effect. It's a direct effect of your system. And so it's a mystery to me how we think about incentive compensation: we assume people are motivated by it, so we set it up so that they do more of what the incentive tells them to do. That's the way people are; they respond to the incentive. But then we expect them to stop at exactly the point where we would want them to stop.

Okay, which way are they going to go? Are they always going to want to get another dollar, or do they not care about the dollars? If they don't care about the dollars, why did we put the incentive compensation in place in the first place? And if they really do care about the dollars, why would we expect them to suddenly stop and not care about earning another dollar? The behaviors that we wish we didn't have are examples of what we think of as side effects, but they were simply effects of the system that we put in place. I mean, executive compensation, stock-based compensation, we say, "We'll give them stock-based compensation, if the shareholders do well, they do well."

Well, it turns out that stock-based compensation doesn't work out that way. The absolute smartest thing a brand-new CEO can do, if they're completely bloodthirsty and they care about their stock-based compensation... and again, if they don't care about their stock-based compensation, why did you give it to them? We assume they care about it; that's why we give it to them. The smartest thing a CEO can do, if they really want to make money, is, the minute they take over as CEO, to say, "Oh my God, now that I'm in and I've looked under the covers, it's a disaster. Everything is just terrible, terrible, and we need to do a big, big restructuring," whatever, to tank the stock right now. Let's say it's at 100 and it goes down to 50, and then the CEO does a whole bunch of things that maybe he didn't need to do, and the stock goes back up to 100, and he or she made a lot on their stock-based compensation, and the shareholders made exactly zero. So, yeah, we have these kinds of silly things, where we're willing to have these very, very tight models, where we think the model is going to produce the outcome we think it will, and then we say, "I'm just so surprised that these other things happened." Well, that's not because people are stupid. It's simply because these systems are complex. It's a kind of arrogance that's the problem. It's not stupidity, it's arrogance. It's the arrogance of thinking that we know more than we really do.
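
The arithmetic behind that round trip is easy to sketch. The snippet below is a hypothetical illustration only: the grant size and the idea that the options are struck at the depressed price are assumptions added here, not figures from the interview; the 100-to-50-to-100 price path is the one Roger describes.

```python
# Hypothetical back-of-the-envelope version of the "tank it, then fix it" pattern.
price_at_start = 100.0    # share price when the new CEO takes over
price_after_reset = 50.0  # price after the "everything is terrible" restructuring
price_at_end = 100.0      # price once things are declared fixed

options_granted = 1_000_000          # hypothetical option grant
strike_price = price_after_reset     # options effectively struck near the low

ceo_option_gain = options_granted * (price_at_end - strike_price)
shareholder_return = (price_at_end - price_at_start) / price_at_start

print(f"CEO option gain:    ${ceo_option_gain:,.0f}")   # $50,000,000
print(f"Shareholder return: {shareholder_return:.0%}")  # 0%
```

The long-term shareholder ends the round trip exactly where they started, while the option holder whose grant was priced at the bottom does very well.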

Sohrab:
I agree with you. Now I would like to bring in another perspective and hear your thoughts on it. I have also served on boards of directors. And on every board that I've served on, we've always had the challenge of setting the compensation for the CEO, the total package. And there was always an effort by some people in the room to say, "Oh, we need clear metrics. And if they hit those metrics, then we know how much they deserve." And I always felt that if we just have these metrics, and we just talk about them and measure them, it takes on a life of its own, and then it drives certain behaviors, and maybe not the ones we want to see. But my sense was that people want to have these metrics to avoid difficult conversations, because if you're a board member responsible for the CEO's salary, you have to sit them down and tell them why you're giving them more money, which is always a pleasant conversation, or why you're not giving them the money, which is then a difficult conversation. And if you have those metrics, yes, they make that conversation easier. But for the organization that you're ultimately responsible for as a board member, it doesn't make things easier, and maybe it's the wrong thing to do. And I wanted to get your perspective on that, Roger.

Roger:
No. No, no. Again, you're spot on there. And the great finance professor Mike Jensen has an interesting take on what you said. He said, yes, people always want performance evaluation, whether it's specifically for compensation or just for performance evaluation, to be completely objective and based on objective, quantifiable data. And they don't like anything else. Anything that's subjective, they don't like.

But Mike Jensen points out that if I were in a position... so let's say I'm the chairman and you're my CEO. If I'm able to quantify everything that I want you to do as CEO, then I should outsource that job, because I can write a service contract for it; everything about that job is objective and quantifiable. So if somebody complains to you about the subjectivity of your feedback, tell them that's the only reason we have you as an employee. If you want completely objective feedback, then I'll outsource you. I've always liked that line. But what I would generalize and say is that it goes back to the chef. If you want someone in the kitchen to cook a recipe from a recipe book to the letter, you're never going to get a great meal. Are you going to get a non-awful meal? Yes, probably. You're probably going to get a consistent, reliably mediocre meal that's never going to be terrible.

So I would argue that to the extent that you manage the CEO based on quantifiable, objective measures, you're going to get a mediocre CEO. You're going to get somebody who likes to paint by numbers, who likes recipes, who likes that kind of relationship, and they're going to be mediocre. So it's not that it's going to be a disaster. You're guaranteed to have a mediocre CEO, and then you're guaranteed to have a mediocre company. Instead, if I want to have a relationship as chairman with you as CEO that's qualitative in nature, and hey, it's not that you can't quantify anything, something will be quantified, but it's qualitative in nature, and I'm adept at having that kind of relationship, then we have a chance to have a fantastic CEO. So it comes down to what you want. Do you want to be fantastic, or are you willing to accept mediocrity? And if it's mediocrity, then read John Doerr's book and fill your company with OKRs.

Positive corporate examples in complex adaptive systems

Sohrab:
Okay. So we started with complex adaptive systems. And we looked at some examples. You gave Wells Fargo and CEO compensation and so forth as negative examples of how not to deal with a complex adaptive system. But in your book you also give a lot of positive examples. For example, you go into Four Seasons as one of the most profitable, I think the most profitable, hotel chains, then Southwest Airlines. You talk about the... I think it's the Canadian banking regulatory system, if I understand the name correctly, and then also your own work at the Rotman School, where you tried to create something different as dean. Can you dive deeper into some of those examples so that people can get a sense of how to deal with complex adaptive systems?

Roger:
Sure. So, yeah, if I take Four Seasons as an example, and yes, it's the most successful, largest, most profitable luxury hotel chain in the world, which was actually founded by a Canadian, Issy Sharp, I think the secret to the success of Four Seasons is the understanding that he was managing a complex adaptive system. And so he introduced a way of working that made connections between things that other people didn't. He said, "The only way we can get our hotel staff to treat our guests the way we'd like them to be treated every day is to treat our staff that way," which means having a holistic view of the staff: how do they think, what do we need them to do, how can we make that possible. And he also said, "Well, if we're going to have great employees, they need to be around for a while to learn how to be great employees." And in the industry, unfortunately, the turnover rate is 70%. So when you meet someone in an ordinary hotel anywhere in the world, you can assume that you're talking to someone who's on their way to a 16-month career in that hotel chain. Imagine that. You're having to rehire and retrain all the time.

He said, "Well, we can't have that. So, what do we do?" Well, they invest much more in the selection process, they invest much more in the training, in the career path, so their turnover, instead of 70%, is 5%. So the average person in Four Seasons is on track for a 20-year career in Four Seasons. Imagine how much better they can get at their job if they're a great Four Seasons service provider, if they're on the path to a 20-year career. So to me, everybody says this is a holistic system where everything is connected to everything else, and we need to think about that. And I would argue that we need to think about it from a human standpoint. We have to think about how it can be that a person that we need and want shows up at our doorstep and says, "I'd like a job with you," and then how it can be that that person holds out for that job, and how it can be that that person does the things in that job that we would like them to do. That's a context where we have to dig deep to create a system that works for them. You can't set up a system that involves people who are not human.

Sohrab:
Absolutely.

Roger:
And maximizing shareholder value is one of those. There are, I think, really foolish CEOs who think that their workers are going to jump out of bed every morning because he or she said, "Our number one goal is to maximize shareholder value." So they imagine their employees jumping out of bed in the morning and saying, "I'm going to work to maximize shareholder value." And then that employee might ask, "Yeah, but who are these people anyway? And..."

Sohrab:
Who are these magical shareholders?

Who are the shareholders seeking to maximize profits?

Roger:
Right. And the answer, if you look at the stock register, is people like Fidelity, BlackRock, State Street, Vanguard. Well, they're not really your shareholders. They're trustees holding those shares for somebody else. So you don't even know who they actually are. And then when you ask, "Well, what's our relationship with these folks?" Oh, they trade in and out of the shares at will, without declaring whether they're happy or sad or anything. They're just trading. Okay. "So I'm supposed to hop out of bed in the morning to work for nameless, faceless people with whom I have a relationship more akin to anonymous sex than marriage. And that's why I'm supposed to be excited to do whatever is possible. And if we make less in a quarter, you fire a few hundred of us to get your costs under control? I'm supposed to be motivated?" That's a description of a system in which there is no humanity.

I'm not saying it's inhumane. I'm not making a value judgment and saying it's inhuman. It's non-human, or a-human. It assumes that you have automatons working for you. And you don't. And that's a central challenge in this modern world. Peter Drucker predicted this. He said, back in the 1950s, there's this new breed of worker, they're called knowledge workers. The muscle they have to use at work is not their arms or their legs or their backs. It's the muscle between their ears. And these knowledge workers can't distance themselves from their work. Someone who is a physical worker can say, "I moved a bunch of rocks." But if you have a software engineer, their software is them. It comes out of their head. It's what they're invested in. And Peter Drucker admonished the world. He said we need to think about these people differently: if we need them to use their brains in this way, we need to treat them as if they were volunteering to work for us. As if they were volunteers. And despite this warning so many years ago, coming from the greatest management thinker of all time, the world persists in treating people in this non-human way. And that's what we get: workers who are disengaged and managed through surrogate measures that make them the worst they can be instead of the best they can be.

Sohrab:
And I think that goes back to the model you were referring to. It's not just the machine model that we apply to the economy; we also look at each individual organization as a machine, much like when Ford was building cars. The raw materials come in, then people do certain things, and then the cars come out. Today, even for organizations producing hardware, product life cycles are so much shorter than that of a Model T, which was built more or less unchanged for almost 20 years. And I love that quote from Henry Ford, not because I think it's the right thing to say, but because it shows so well how people thought back then. He said, or at least it's written that he said, "It's a pity that every time I need a pair of hands, they come with a brain attached."

Roger:
Absolutely.

Sohrab:
Every time I say that to executives I work with, they say, "Yeah, but right now we don't need the pair of hands. We need the brain." I then say, "Yes, exactly. That's what you need." So you have to use a very different mindset when you're managing or leading a knowledge worker, or whatever term you want to use. And, again, even among knowledge workers I think there's a new kind of worker: a tax accountant is a knowledge worker, but they're doing repetitive work, whereas somebody who's building software or creating great user experience design, the creative economy, is dealing with even more complexity than the typical knowledge worker.

Roger:
Yeah, I agree. I agree. And if you're interested in reading more about this, if you haven't already, read my 2009 book, The Design of Business, because there I argue that all of this knowledge flows through a funnel, from a mystery to a heuristic to an algorithm. And since the advent of digital computers, once something becomes an algorithm, you can code it, which is where we get AI and machine learning, et cetera. So what you're saying is that there are things in the category of heuristics where you need a knowledge worker who can deal with the complexity. There are no rules yet. You need their creativity. In due course, some of it moves into the category of algorithms. And that's what you're describing with these repetitive white-collar tasks. Those repetitive white-collar tasks are all being reduced to software. And that's my point about how knowledge flows through the economy.

Everything starts as a mystery. We have no idea how it works. It used to be a mystery: "Hmm, why does it fall down when I let this go?" There were all kinds of theories: animal spirits, objects' love for Mother Earth, all these things. Then some smart guy got hit on the head by an apple, Sir Isaac Newton, and said, "Ah, there is a universal force called gravity that pulls everything down." That's a heuristic. It's no longer a mystery. It's a heuristic. And then we study it enough and find that everywhere except America, things accelerate at 9.8 meters per second squared. In America, of course, it's 32 feet per second squared, because America is exceptional. And then we have an algorithm. And because we have that algorithm, Honeywell can create software that flies aircraft through the sky in a user-friendly way. And you now have a kind of automated flight system.

Every bit of knowledge in the world follows that path. Some of it is still a mystery and hopefully will never become software. What is love? But everything tends in that direction. Now, what people worry about more than I would is watching things go from heuristics to algorithms, to code, and saying all our jobs are going to go away. I don't see it that way. The world has an infinite number of puzzles to solve. And we will always need people to solve the puzzles, because only when we have the algorithms can a computer do something useful with them. But people are concerned about it.

There was a huge outcry in the United States when the first automatic washing machines appeared. Before automatic washing machines, the average housewife in America was spending two hours every day doing laundry. And there was a huge concern that they would be at a loose end if they could just throw the laundry into an automatic washing machine.

Sohrab:
What to do with those two hours?

Roger:
They thought of lots of more productive things they could do. And that's the story of technological progress, as far as I'm concerned.

Who is responsible for the strategy of the company and a product?

Sohrab:
Yes. But that could also be a model that turns out to be wrong. And we'll have to see how things develop over time.

Roger:
Absolutely.

Sohrab:
Before we get to some of the last things you mention in your book, I wanted to ask a side question. We kind of started with me noting that you've written about a lot of things. But one of the main things that I've seen in your work, and how I connected to your work, was always the theme of strategy. How do you connect this book, When More Is Not Better, to your overarching theme of strategy? And what does it mean specifically for CEOs, but also for other executives who need to develop and implement strategy?

Roger:
Sure. So I think the connection, I would say, is that "When More Is Not Better" should in some ways help an executive or a CEO, someone who's responsible for strategy, to take an approach that says strategy is a model. So Southwest has a model. Jack Bogle of Vanguard had a model for the business, and they then built the business around a set of principles derived from that model. "When More Is Not Better" is a warning that you shouldn't get too enamored with your model, because it's going to be flawed and there will be adjustments to it. So you should be willing to keep tweaking and improving your strategy.

Now, does that mean you revise your strategy every five minutes? No. But it does mean that you should constantly tweak it, because the forces of adaptation are unstoppable. And the likelihood of your model capturing all the complex relationships between, I don't know, the channel and the buyers and the cost structure, whatever, the likelihood of getting it completely right, is zero. So you should watch the signs and adjust. That said, I still disagree with the thesis that your strategy should be completely emergent, meaning we just figure it out as we go and keep changing it. That's a bit too nihilistic for me. I think you should always say, "Based on everything I know today, this is my model, and this is where I'm going to play based on that model." But I'm not going to put blinders on. I'm going to keep watching it and thinking about it, and also acknowledge what John Sterman said: there are going to be effects that you didn't anticipate. That is just the nature of a complex world.

Sohrab:
Of a system.

Roger:
To ignore them and say, "Oh, it'll go away," is a stupid idea. The best idea is to say, "In what way is this thing that I think is an anomaly not what I expected? What if it actually continues on its current path? What would I do now to adjust my strategy based on that?"

Sohrab:
And I think it also depends a little bit on the level at which you define your strategy. What I really like, and what I usually teach when I'm working with people who build products, is a quote from Jeff Bezos: "Be stubborn about the vision and flexible about the details." You still need that vision, and you're not going to adjust it every five minutes just because you heard one thing. So you need to give some kind of guidance, some kind of direction on where you want to go. But within that direction, you have to check and adjust. And maybe sometimes you realize that your model was wrong, and then you make a major adjustment, which may or may not be a pivot. But you need some kind of thing to work towards.

Roger:
Yeah. And can I just say, on the Bezos thing, in some ways he's adopting what I think is a completely flawed management model, the idea that the top decides and the rest execute. But he's adapting it in a good way. He says be flexible on the details. Well, why would you be flexible on the details? The answer is because decisions have to be made at that level too, and those are strategic decisions. So I'm against the model that says there's something called strategy and then there's something called execution. I don't believe in that. What Bezos is talking about is making some strategic decisions at the Amazon level, and that creates the need to make other decisions. I call those strategy decisions too. Other people call them execution. But because other people call them execution, Jeff Bezos has to make this big point about being flexible.

The only reason you have to say "be flexible" is because you can't define from your high level exactly what decisions they need to make. If you try, you're actually going to infantilize them, and you're going to be wrong. You have to say, "Do something that's consistent with my strategy, but you have to make the decisions." So I say senior management's job is to make the decisions that they can make better than anybody below them, but only those decisions, and then to hand the next-level decisions down. Say, "You at the next level have to make a choice. And it must be consistent with my choice. But what exactly you choose is up to you. And the reason is that you know more about your level than I do. But it has to be consistent with mine."

Sohrab:
Exactly. You give the guidance, and you decentralize the decision making to where the information is.

Approaches to dealing with complex adaptive systems

Roger:
Exactly. In fact, Four Seasons is successful because they ask bellhops to make decisions. If a guest comes in and says, "I have to be upstairs in two seconds for a meeting. Can you take care of my car?", they have to decide in a split second whether to say "Yes, sir" or "I'm afraid not, sir." You can't always say "Yes, sir," because it could be an idiot doing something stupid, or it could be one of your great guests who has always been loyal to Four Seasons and now needs a favor. And the bellhop is going to have to make that decision. The bellhop is going to be trained to make that decision and is going to be told, "This is how we want you to think about this decision. But the application of that framework to make the decision is entirely up to you."

Well, if you're only there for 16 months, how is that going to go? Terribly. But if you're on your way to a 20-year career, chances are you're going to do it well. And it's similar at Costco. There's a tremendous amount of decision-making that gets pushed down. And one of the things Costco says is, "By the way, we don't hire anybody from the outside." So if you're on the shop floor, in the store, and you have a track record of making good decisions...

Sohrab:
You make it to the next level.

Roger:
Yeah. You make it to the next level, and you know what, one day you can be CEO. Whereas most of their competitors say, "We hire our future CEOs and our executive development program candidates from Harvard, or Stanford, or Wharton, or Columbia Business School." Those are different approaches to a complex adaptive system.

Sohrab:
So Roger, being mindful of your time and our participants' time, I have questions for at least another hour, but our timebox is coming to an end. And I want to give you the last word. You mentioned that your reason for writing this book was mostly that there was a model that was wrong and you wanted to make people aware of it. What I also took away from the book is that we are out of balance as a society and need to recalibrate and rebalance between efficiency, effectiveness, and resilience. So there are multiple dimensions, but we are very much stuck in the efficiency realm. And I'd like to hear a few words from you on that before we close today's event.

Roger:
Maybe I'll just go back to a bit of history. There are a lot of things in my life that I didn't really understand or appreciate at the time; I just stored them somewhere in the back of my brain. But they keep resonating. And this was in the mid-80s, so a long time ago. I was doing training programs on strategy for Bell Northern Research, the Canadian version of Bell Labs. And a very smart British engineer I was working with, we were talking over dinner one night, and he started explaining something to me... because he was an engineer, a Bell Northern Research engineer, and he was really interested in systems. And he said, "Roger, you have to understand that when systems break down for reasons you don't understand, it's typically because they're reaching their tolerance limits. And you can patch this thing or patch that thing and think you've solved it, but you haven't. It's running too hot, if you will." And I just salted that away. And that's how I feel, Sohrab: we're pushing the system to its limits.

Now, all the people who are advocating for environmental sustainability say that, of course. They say, "Well, we're close to the limits. That's why bad things happen, because we haven't left enough slack and we don't care enough about resilience." So I think the reason bad things happen in our economy, big capital market crashes, big corporate collapses, workers who are miserable, is that we're just not being mindful enough with the system. We're letting it run hot. And one manifestation of that is that we push the efficiency button so hard. And the interesting thing to me is that we just don't have to do that. The examples I gave, the Costcos and the Four Seasons of the world, do it without strain. When you go into a Costco, and I'm sure most people here have gone into a Costco at least once, the employees aren't running around frantically, and so on. It's pretty calm. And that's because they've developed a human system that does a lot of things in a human way. And if we could bring back that sense of a human way of doing things, one that really values the human, then I think we'd be moving away from the cliff that we're heading toward.

And that's my hope for the book, that it encourages us to go back and find a smarter way of doing things rather than the brute force way, because I feel like the brute force way brings us closer to a system that just doesn't work anymore.

Sohrab:
Great words, Roger. Thank you so much for participating and taking the time to share with us. I appreciate it very much. And I hope to have another conversation with you someday.

Roger:
I would love to do that, Sohrab, because you do such a good job. So you've been a wonderful host. I really appreciate the work it takes to be such a good host. And I would be happy to do it again.

Sohrab:
Thank you so much.

Roger:
Good luck. Good luck with everything.

Sohrab:
Thank you.
