Howard Aldrich Interview: 2000 OMT Distinguished Scholar

At the August 2000 Academy Meetings, Howard Aldrich received the OMT Distinguished Scholar Award. His address challenged organization scholars to broaden the domain of organizations we study, to conduct event-driven research rather than just outcome-driven research, and to expand our repertoire of methods. Following the Academy, Aldrich discussed these topics further with Tim Pollock, winner of the 2000 Lou Pondy Award for Best Paper from a Dissertation.

Interview with Howard Aldrich, 2000 OMT Distinguished Scholar

By Tim Pollock

POLLOCK: In your talk you suggested that we need to provide more realistic pictures of the organizational landscape in our theorizing. Could you elaborate on ways scholars can accomplish this goal while retaining the parsimony necessary for a good theory?

ALDRICH: Well, there are several levels on which we could accomplish that. One way would be to consider how research is currently reported, and to have people become more aware of letting readers know what kinds of organizations (shapes, sizes, flavors, styles, whatever) we are learning and reading about. I say this because when I read the sample descriptions in empirical articles, I often find it quite difficult to get a sense of, let's say, the average size or size distribution of the organizations that were studied, or to get a sense of their ages. When it is a variable-driven article, we might occasionally have means and standard deviations, but even that is sometimes lacking. So, just in the way current research is reported, if people become more aware that we would like more information about the variety and diversity of organizations we are reading about, they could put more information into their work.

Of course, that is a short-term fix. The longer-term issue I was addressing was the skewed nature of the research that does appear in our journals, and the skewed nature of the populations that scholars choose to study. That in turn biases the kinds of theory that we do, because the people who are writing theory are inevitably basing it in part on what they choose from the empirical generalizations available in the literature. If those empirical generalizations are based upon a very small subset of the universe, then the theorizing is also going to be inevitably skewed towards those larger organizations. For example, let’s take leadership. For me, one of the most exciting areas of leadership would be figuring out how millions of small firm owners manage to keep their organizations intact from one day to the next through a variety of circumstances. But when you read the leadership literature, what you see instead is a focus on people who are either managers in very big firms, or worse still, from my point of view, people who are the CEOs of very big firms, not middle managers. So, that whole leadership literature, because of the focus on bigger, established firms, ignores this much larger pool of people who are also leaders by most definitions, but who don't get the attention they deserve.

POLLOCK: You opened your address with a quiz that highlighted just how small the research domain that OMT scholars have been pursuing really is. One of the questions then becomes, if people are interested in conducting studies that broaden the research domain of OMT, how do you get adequate data on small, young, privately held firms and Mom and Pop businesses in a way that is rigorous enough to meet the demands of our field? [Note: The quiz included such questions as "How many companies filed tax returns in the U.S. last year?" (Answer: approximately 25 million) and "How many public companies are there in the U.S.?" (Answer: approximately 20,000)]

ALDRICH: I recognize there is a resource issue involved in the kind of information that I would like scholars to collect. If we all had unlimited resources, we would obviously choose to get representative samples, follow them over long periods of time, and have an army of people documenting everything that is going on in the organization. Of course we don't have that available to us, so the question is what kinds of compromises do we have to make? I would say that one way to think about this is to realize that questionnaire- or survey-based research is not the only way to approach the problem. When you are thinking about survey-based research, you are typically thinking about large-N studies, and you are thinking right away of the need to compile sampling lists.

One of the things I was trying to get at when talking about event-driven research as opposed to outcome-driven research is the need for a lot more information about micro-events, the kind of stuff that Andy Van de Ven has put into his research. That is going to mean spending more time, perhaps, in a smaller set of organizations, maybe using field observation methods instead of survey research methods. That will mean that the sample size will be smaller and that perhaps the amount of time required for the project will be longer. I recognize that. And so that's why, in another talk I've given over the years about the obligations of senior scholars, I've said that we don't just want to say to junior scholars, "okay, we haven't done very well at this, and we haven't succeeded. Write us off and you do it better." It doesn't look too good to junior people who look at us and say, "why didn't you do this stuff when you had the chance? If you didn't do it, why do you expect us to do it?" So I recognize this is true, and I'm partly speaking to the more senior scholars who still have the energy to get up from behind their desks and do one more study. I'm saying perhaps they should go and do some of this more field-based research themselves.

The other issue you are raising is where we get the information. There are sampling lists now that you can get through Dun & Bradstreet. You can say to them, "I want all of the firms started in a particular industry or a particular region, or that were started in a particular set of years," and actually get a contact list of firms that meet all of those criteria. The sampling list itself is not going to cost you very much. You can actually get it for a city, for a county, for a region. That's something I don't think we take enough advantage of. People are instead more likely to settle for more convenient samples.

In other words, we do have these very comprehensive and, now, extremely accurate lists that we didn't have in the past. Fifteen or twenty years ago, Dun & Bradstreet would have systematically ignored a lot of smaller, younger firms. But that's really not true anymore. So somebody really can't say "there's no list available." Also, in every county in every state there are requirements for firms to get business licenses. For example, in the Research Triangle, when I did a project with one of my students about ten years ago, we went to the Wake County courthouse and got the list of all the new firms that had registered their names there. So we were actually able to get, for a particular year, all of the firms, and from that we could then sample and pick the firms that we wanted to look at. So, it may take a little more field work, a little more ingenuity, but I would claim that it is quite possible to get lists of new and small firms if you put some time into the search effort.

POLLOCK: You mentioned event-driven vs. outcome-driven research. Could you please explain what you mean by the difference between event-driven and outcome-driven research, and perhaps suggest some particular issues that junior folks may be interested in exploring, or that doctoral students might want to pursue as dissertation topics in this area that could provide a theoretical contribution?

ALDRICH: Outcome-driven research begins by identifying some existing set of firms, say successful firms or firms that have experienced a particular event, and then tries to construct an explanation for why that outcome occurred. Sometimes we think about this as selection bias, but it's a little more subtle than that. Let's say we do a survey of organizations because we want to find out which organizations have adopted a new information technology. So we have a cross-sectional, representative sample of firms, identified as to which ones adopted and which ones didn't. We have also put into our survey some questions about possible antecedent events that might have led them to adopt innovations. The difficulty, of course, is that we are looking at this at one point in time, asking about an outcome that has already occurred, and asking, based upon the theory that we have, which events might be associated with this outcome. We don't have the process data from such a cross-sectional study to let us know whether some things differed systematically in the histories of all these firms that led some to adopt the new technology and some not, and we'll never be able to see that in the cross-sectional outcomes that we have.

The alternative would be to begin with a set of firms that were at risk of adopting this technology, again with a theory-based set of questions that we would be asking, or looking into the records they have, one or the other. Then we say, well, we think that one of the events that might lead to the adoption of new technology could be a new CEO, or a person coming into the organization with a very different educational background than the people in that department, or different industry experience than people in that organization have. So, then we follow all the organizations over time, watch when those events occur, and look at the consequences. If we are able to say "here are the events that actually occurred, and subsequently there either was or wasn't the adoption of a new information technology," then we can more assuredly say, "yes, this model actually works." So the event-driven explanation starts by saying that we've identified the events, and then places them prior to the outcomes in our causal ordering.

POLLOCK: So we need to find situations where the outcome has not occurred yet, and we need to follow these firms over time as they go through the process and see whether or not the event occurs?

ALDRICH: Right. I think that people are already familiar with the event history analysis that ecologists have done. Because they have used primarily archival data, they've been stuck with just a few events that can be studied that way. One of them is, of course, death, which you can pretty easily study. (Although what really constitutes "death" is often up for grabs.) Another has been CEO succession, which you can study with archival records because you typically get something recorded when the event occurs. So, there are times when archival information will yield much of what we need. But there are other kinds of things that you are not going to find in the archives, because the archives were collected for administrative purposes by government bureaus, and the record-keeping bureaucrats in organizations are keeping records for themselves, not for researchers. They're keeping them because they need to make decisions or issue orders of some kind. And so it's not so easy to take what they have in their archives and mold it to our purposes.

POLLOCK: So do you have any suggestions for doctoral students who are looking for dissertation topics, where they could pursue some event-driven research to answer a particular theoretical question that hasn't been explored adequately, or that has only been explored from an outcome-based perspective?

ALDRICH: The first thing I always tell people to do is look at the last chapter of my 1999 book. In that book, after having gone through 400-odd pages talking about what an evolutionary perspective can bring to organization theory, I tried in that last chapter to lay out a bunch of issues. So if you look at chapter 12, you'll see about three or four ideas per page for questions that could be pursued in doctoral research. That's the first thing I would say. That chapter was really written for doctoral students and junior scholars. I wrote it to say, "here are a whole bunch of questions that remain unanswered, even after all this review of the literature and my attempt to integrate and synthesize." What the book shows is how much we don't know about some of these issues.

Secondly, my own personal interest these days is in human resource issues. I'm still interested in the ways that new firms emerge, and the social interaction that occurs as a firm is coming out of chaos and emerging as a complex entity from the efforts of the founder, founding team, and early recruits to that effort. The process of emergence raises questions of what processes promote coherence, what processes impede coherence, what things happen in the early days of a young organization that drag out the process, or even doom it? What are the things that people do that can push it forward? For example, I was talking to my friend and occasional co-author Marlena Fiol about this at the meetings. There's a revolution that's occurred in cognitive science in the way we think about cognition. The old way of thinking about cognition, which is reflected to some extent in a few chapters of my book, relies on the concept of schemata, or templates, and the notion that people carry around with them these pre-set conceptual categories they use to code the world. In my book I talk about how these categories get modified in the process of becoming a member of an organization. But there is another way of thinking about cognition, which is a more distributed notion that cognition is actually a very social activity, and it may not be that the categories are so much in the heads of the cognitive agents as they are shared among a set of them, with no one person in that set carrying a fully formed schema, or template.

This is relevant to new firms because, as I argue in the book, one of the things that has to happen to get a coherent entity going is that there has to be some sense in the organization of shared vision, of shared purpose. You can see hints of this in the way Selznick talked about this issue in his book, Leadership in Administration. In the new way of thinking about cognition, it is much more of a joint product, much more shared, much more of a collective effort. So you can imagine following a start-up firm, for example, and discovering that founding entrepreneurs learn as much from their early employees as they bring to the venture. The definition of what the venture is actually doing, how it relates to others and its environment, and why people are doing what they do is going to emerge out of the understandings they construct through that emergent process. That is something that is just very difficult to capture with static, cross-sectional designs. It is probably also very difficult to capture with our standard survey instruments, although I don't think it's impossible. But it really would require people to spend time in emerging firms and to follow them over time. I think that would really be fascinating. Laurie Levesque is doing something like this in her dissertation at Carnegie Mellon.

We just don't know much about the early days of firms. We tend to stress the financial and technological side, and not so much the social side. But clearly practitioners are aware of this. If you read the magazines, journals, and business periodicals directed toward practitioners, you see that they spend a lot of time talking about human resources, about recruiting, hiring, and retaining, about teamwork, and about how you get people who are highly skilled, and who think of themselves as free agents, to sign on and stay within an organization long enough for the firm to get the value out of that hire. And I think that people in organization theory have kind of neglected that. We've treated these kinds of issues as human resource management or personnel issues, and said, "well, that's for people in the human resource division to deal with." And really, organizations don't exist for us to study unless they master some of these early problems, one of which is the creation of a coherent entity: the emergence of a set of actors who have a certain understanding of what they are doing.

POLLOCK: One of the things that intrigued me in your address is that you differentiated between measuring intervals in clock time, which is what we typically do, as well as in socially expected duration. Could you talk a little bit more about the concept of socially expected duration and how it could impact the way we develop theory and collect and analyze data? Is there a difference between what we do now, and what we could be doing if we focused more on the socially expected element of duration?

ALDRICH: We have some scholars whose work I borrowed from who have already shown the way. I would mention here Connie Gersick's work, and also Barbara Lawrence's work. We have people who have talked about this, building on Robert Merton, who developed the idea. So we have some guidelines to help us with this. Again, it's the idea that there are normative expectations that people must take into account in their actions. There are expectations built into universities, for example. At some universities they have a reappointment clock: faculty get a three-year appointment, and if they are renewed they come up for tenure in the fifth or sixth year. Those are socially expected categories in the sense that your career is now set up in three- and six-year chunks. And Connie Gersick would say, well, if you give people a three-year or six-year duration in which they are meant to be working, one expectation we would have is that they take that expected duration and halve it, and they'll time what they do with respect to the midpoint of that interval. So, for example, if you give someone a three-year appointment, you would expect that after a year and a half, some kind of panic will set in. People will begin to recognize that they have used up half of the time they've been allotted to prove themselves, and their behavior will change substantially in the second half of that period, as opposed to the first half.

When you look at the start-up process, for example, one way to interpret what happened this past April, with the change in expectations people had about the dot coms, is that many of the entrepreneurs who started the dot coms were thinking about their enterprises in a very long time frame. In fact, it's not clear they actually had an endpoint to their thinking! Some of them might have been thinking about the time frame of their ventures in terms of an IPO, so that would have meant they were thinking, "Well, I probably have five or six years or so from start-up time, to when venture capitalists get involved, to when we go public." So they were thinking they had five or six years to prove themselves. That changes the way they behave. For them a month, six months, a year is really not a salient time unit. For them it's the six years that's the salient time unit. So they had no pressure to do anything in a hurry, and they wouldn't have any pressure until they were maybe halfway through that. In April, we had what I call in my book a classic period effect. Suddenly, the understanding of the investor environment changed from "we are waiting for an IPO in six years" to "we want to see results more quickly." So, all of a sudden these people who were thinking about time in half-decade chunks are being asked to think of time in terms of months. If I'm a new venture CEO, all of a sudden my investors say they want to see some positive cash flow inside of eighteen months. Nothing has changed as far as the fundamental issues about building a business. What has changed is the duration in which people expect that to occur. So the definition of a successful start-up is no longer, "Take six years, build the company up, and have this public event." Now the duration of a successful start-up is: within eighteen months we want to see positive cash flow, or at least movement toward positive cash flow, maybe even profitability. Does that make sense?

POLLOCK: Yes, that makes total sense. So in terms of trying to study these firms we need to study the rhythms of life for these companies, and see how they adjust and adapt to them?

ALDRICH: Right. What are the social expectations, what are the community or population expectations? The definition of time is very context specific. In particular, the socially expected definition of "time" affects the urgency with which people carry out what they are doing. People slow down or speed up what they are doing depending on how close they are to these timelines, or guideposts.

POLLOCK: So one interesting area of inquiry may be the extent to which firms, when they have a shift occur like the one that happened in April, have the ability to adapt and adjust to these changes in social expectations?

ALDRICH: Yeah, that's very interesting. In fact, I've seen reports, such as from the Gartner Group, saying that something like eighty percent of the dot coms won't be around in a couple of years. That's something people were not talking about before April. People weren't talking about mortality as a big deal. We knew that mortality was a likely ending event for many of these firms, but there wasn't a sense of urgency about it. And now you see this prediction being echoed over and over again, increasing the pressure on people to shorten their time horizons and not think about time in terms of half decades, or even years. People are now actually being encouraged to think about time in terms of months.