7/28/2019 SSRN-id464762

    Business School Rankings and Business School Deans*

    C. Edward Fee

Michigan State University
[email protected]

Charles J. Hadlock
Michigan State University
[email protected]

Joshua R. Pierce
Michigan State University
[email protected]

First Draft: September 19, 2003
This Draft: October 6, 2003

* The most recent version of this paper can be found at http://nebula.bus.msu.edu:8080/feehadlock. All errors remain our own.


    Business School Rankings and Business School Deans

    ABSTRACT

We examine the relation between dean turnover and changes in rankings in a comprehensive sample of business schools with ranked MBA programs from 1990-2002. We find little evidence that dean departures are related to changes in a school's overall rank in the U.S. News & World Report rankings. However, dean turnover does appear to be elevated when a school drops in the Business Week rankings and when a school deteriorates on the placement dimension as measured by U.S. News & World Report. These results are significant in both a statistical and economic sense. Our findings suggest either that schools respond to changes in rankings or that rankings reflect information schools use in their personnel decisions.

JEL Classification: G30; J41; L20; M10; M51
Key words: Business schools, deans, rankings, turnover, performance evaluation


    1. Introduction

    An event that attracts substantial attention from constituents of leading business schools

    is the public release of rankings of MBA programs. The attention these rankings garner and the

    impact rankings may have on students, recruiters, faculty, and administrators has generated a

    lively discussion in the business and academic press. Some of this discussion centers on

    understanding what the various rankings actually measure and debating what the rankings should

    measure. Other discussion considers whether the focus on rankings has any effects on how

    business schools operate and whether or not these effects are welfare improving.

    In this paper we contribute to these discussions by providing systematic empirical

    evidence on the relation between rankings and the career of a school's most visible decision

    maker -- its dean. In particular, we examine whether changes in rankings are associated with the

    likelihood that a dean departs from his/her school. Our analysis allows us to examine the joint

    hypothesis that (a) deans are held accountable for school performance, and (b) that rankings are a

useful indicator of a dean's performance. By using both the Business Week (BW) and U.S. News

and World Report (USN) rankings along with the components of these rankings, we are able to

    assess which performance metrics appear to be most heavily relied on in evaluating deans.

    Our sample includes the deans of all schools ranked by BW or USN from 1990-2002.

    We use news articles to determine the circumstances surrounding each dean's departure and

    estimate logit models predicting whether dean turnover is related to rankings changes. Our

    estimates reveal no significant evidence that overall performance in the USN rankings affects the

likelihood of dean turnover. However, poor performance in the BW rankings is associated with

    a significant increase in the likelihood of a dean leaving. Thus, while the relative information


    content of the USN and BW rankings may be debated, schools are behaving as if they are

    evaluating deans based on factors that are captured in the BW ranking measure.

    To further explore these results, we estimate models where dean turnover is a function of

    each individual component of the respective rankings. Interestingly, while performance in the

    overall USN ranking is not associated with turnover, we find that a school's performance on the

    USN placement measure is significantly related to turnover. The three other USN component

    measures are insignificant. A possible explanation for this evidence is that placement success is

    the ultimate market measure of whether a school is effectively managing its operations. When

    we turn to the two component measures of the BW ranking, we find no evidence that a single

    component is driving the overall importance of the BW ranking. Finally, we find little evidence

    of systematic changes in rankings or program characteristics following dean changes.

    Taken as a whole, our evidence suggests that business school deans' preoccupation with

    rankings is well founded. The BW overall rankings and the USN placement metric are both

    related to dean turnover just as stock returns and earnings performance are related to CEO

    turnover. This evidence is consistent with schools believing both that deans can affect rankings

    related metrics and that these metrics are important.

    The rest of our paper is organized as follows. In section 2 we review the specific

    literature on business school rankings and the broader academic literature on performance

    evaluation and management turnover. In section 3 we discuss our sample construction and

    present summary statistics. Section 4 reports our main results on the relation between rankings

    and dean turnover. In section 5 we conclude.

    2. Rankings, business schools, and turnover


    2.1 The information content of rankings

    As interest in business education has grown over time, publishers have expended

    considerable effort towards measuring the quality of MBA programs. Five major news

organizations currently publish rankings, with the Business Week (BW) and U.S. News and

World Report (USN) rankings being the oldest and most established.1 Business school rankings

    aggregate subjective and objective assessments of a school's inputs (quality of incoming

    students), outputs (quality of graduating students), and the process of turning inputs into outputs

    (the school experience). Each ranking methodology uses different measures and different

    weights on each type of measure, thus creating at times substantial differences between rankings.

    Two recent academic studies have attempted to understand the information content of

    business school rankings. Tracy and Waldfogel (1997) report that the BW and USN rankings are

    both heavily influenced by the quality of a program's students (inputs) and the program's

    placement success (outputs), but not necessarily by the value-added by the program (outputs

    minus inputs). Dichev (1999) studies the time series properties of the BW and USN rankings.

    For both rankings he finds that the change in a school's ranking exhibits strong negative first-

    order autocorrelation. In addition, he reports that changes in BW rankings are not significantly

    correlated with changes in USN rankings. The Dichev (1999) evidence suggests that there is

    significant noise in these highly publicized ranking measures.

    Certainly school rankings have many shortcomings, many owing to the fact that school

    quality is a multidimensional and imprecise notion. Nevertheless, these rankings may contain or

1 The other rankings are published by Forbes, the Financial Times, and the Wall Street Journal. These rankings have not existed for a long enough time period to use in our study.


    reflect useful information, or at least market participants may behave as if they do. By

    examining whether schools' choices are related to rankings changes, we can assess whether the

    information in rankings, despite the flaws, is useful or important to schools in assessing

    performance.

    2.2 Business school objectives

    Business school constituents have many diverse and, at times, conflicting objectives. The

    choice of objectives and the relative weights placed on each objective will depend on the school's

    governance structure and the identity of the individuals associated with the school. A magazine

    ranking creates a highly visible performance metric that may place more emphasis on certain

    areas than a school would choose on its own. If a school cares about the rankings, this may

    cause a change in a school's organizational objectives. Along these lines, Brickley and

    Zimmerman (2001) discuss how pressure caused by rankings resulted in a shift towards

    rewarding teaching performance at the University of Rochester business school. They report that

    faculty at the school appear to have responded to the altered incentive scheme. Zimmerman

    (2001) discusses how a fixation on rankings has affected business school strategies and the

    detrimental effect this has had on business school research activities.

    There is little systematic evidence indicating how much business schools and/or their

    deans care about rankings, although anecdotal evidence abounds. Some schools and deans

    conspicuously advertise their rankings, particularly when they are high, while others purport not

    to use rankings as a measure of success. Some deans are publicly overt about their desire to

    increase rankings. For example, after being appointed dean at the University of Maryland


    business school, Howard Frank is quoted as saying "we want to be in the top 10 or top 15"

    (Washington Post, 7/13/1998, page F10). While not all deans are so blunt and some may truly

    not care about rankings, certainly many business faculty believe that their deans care about

    rankings and that this has had an impact on their school's activities.

    Why should deans care about rankings? One possibility is that they derive inherent

    satisfaction from superior performance in rankings. Alternatively, it could be that the labor

    market or the governance system of business schools is oriented towards rewarding excellence in

    rankings or in school characteristics that are highly correlated with rankings (quality of students,

    placement success, student satisfaction, corporate reputation, etc.). If schools are utilizing this

    type of reward structure, it presumably reflects the objectives of the principals who evaluate the

    deans. If schools do not use ranking related information to evaluate deans, this would suggest

    that schools either find these measures to be uninformative or, alternatively, that they have weak

    evaluation and governance systems.

    2.3 Performance evaluation and turnover

    The approach we take in this paper draws heavily from the empirical modeling in the

    literature examining CEO turnover [e.g., Coughlan and Schmidt (1985), Warner, Watts, and

    Wruck (1988), and Weisbach (1988)]. The basic approach is to estimate logit models where the

    dependent variable indicates whether a manager departs from his/her firm and the main

    independent variable of interest is performance. The magnitude of the relation between

    performance and turnover is then interpreted as either a measure of the strength of an


    organization's governance system, or alternatively, as a measure of the degree to which the

    chosen performance metrics provide useful information concerning managerial ability.

    In adapting this methodology to our study, a few considerations are worth discussing.

    For both CEOs and deans, it is often not clear whether an individual was forced to depart or not.

    We will follow the management turnover literature in making the implicit assumption that the

    typical departure is forced unless there is compelling evidence to the contrary.2 However, it is

    important to note here that the distinction between voluntary and involuntary departures is often

    not important, as most job separations can be viewed as two-sided decisions where both parties

    agree that the firm and the manager are a poor match [e.g., Jovanovic (1979)]. Viewed in this

    way, we would interpret a relation between dean turnover and rankings downgrades as evidence

    of a poor dean-school match leading to a job separation.

    With regards to performance measurement, most studies of management turnover assume

    that executives are evaluated based on metrics that reflect shareholder objectives, primarily stock

    returns and/or earnings performance. Boards of directors presumably rely on these metrics

    because of their information content with regards to managerial ability. By using rankings as our

    performance metric, we are testing whether rankings provide or capture relevant information

    concerning the perceived ability of a school's dean. There is, however, a subtle identification

    issue as to the role of rankings in performance evaluation. It could be that the actual publication

    of a ranking provides useful information to evaluate a dean. Alternatively, it could be that the

    ranking simply reflects or publicizes other information that is known to school administrators

    and faculty about the quality of a school's programs. While it is impossible to completely

2 Supporting this assumption, Fee and Hadlock (2003a) present evidence indicating that most managerial departures that are not reported as forced appear similar to forced departures in many respects including post-separation labor market prospects. The one type of departure that appears quite different from the others is cases where an individual jumps to take a more prestigious position elsewhere (see Fee and Hadlock (2003b)).


    disentangle these two possibilities, we will make some attempts when we examine the timing of

    dean departures around rankings releases.

    3. Data and sample selection

    3.1 Sample construction

    We begin by identifying all schools with MBA programs that are ranked by either USN

or BW from 1990 to 2002.3 We include a school in a given year only if it was assigned a

    numerical ranking in either USN or BW that year. Over time the USN ranking has expanded

    from 25 schools to 50, while the BW ranking has expanded from 20 schools to 30. Thus, our

    later sample period contains more schools and these are the lower tier schools. More specific

    details on how the rankings are constructed are reported in the appendix.

    For each included observation, we identify the school's dean as of the start of the

    academic year (September 1) using data from AACSB membership directories, various

Hasselback directories of business school faculty, and annual editions of Peterson's Annual

Guide to Graduate Study. We double-check all discrepancies between these sources and add to

    our information set by looking at histories on school websites, online vitas of the deans in

question, and news searches on the Factiva electronic database (this source includes hundreds of

    local and national newspapers). This process allows us to determine who was in office every

    September 1 and whether there was a change in dean over the subsequent 1-year period.

3 We begin in 1990, as this is the first time that both magazines published rankings in the same year. USN first published a ranking in 1987 but began regular annual rankings in 1990. BW published its first biennial ranking in 1988.


    For each dean in the sample, we collect several pieces of relevant information. An

    important control variable in many turnover studies is age. Unfortunately, however, systematic

    data on age was not available from our sources. As a reasonable alternative, we construct a

    proxy variable by using data from our various directories and news searches to identify the year

    the individual received his/her highest degree (usually a Ph.D.). We then construct an implied

    age variable by assuming an individual receives their doctorate at age 27, master's degree at age

    26, and bachelor's degree at age 21. Since an individual's tenure on the job may affect the

    strength of his/her attachment to the position, we create a tenure variable indicating how many

    years the individual had served as dean as of the start of the observation year. Finally, we record

    whether an individual was hired as an insider or outsider and whether they had an academic or

    non-academic background.
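The implied-age proxy described above can be sketched as a small function. The assumed attainment ages (doctorate at 27, master's at 26, bachelor's at 21) come from the text; the function name and interface are ours.

```python
# Implied-age proxy: infer a dean's age in a given observation year from the
# year and type of his/her highest degree, using the paper's assumed
# attainment ages (doctorate at 27, master's at 26, bachelor's at 21).
ATTAINMENT_AGE = {"doctorate": 27, "masters": 26, "bachelors": 21}

def implied_age(degree: str, degree_year: int, observation_year: int) -> int:
    """Return the implied age as of the observation year."""
    return ATTAINMENT_AGE[degree] + (observation_year - degree_year)

# A dean who earned a Ph.D. in 1975 has an implied age of 53 in 2001,
# in line with the sample average of about 53 reported in Table 1.
```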

    We report in Table 1 basic summary statistics for our entire sample where each dean-

    school-year from 9/1990 to 9/2002 is treated as a single observation and deans serving on an

    interim basis are excluded. To identify potential time trends or differences by school rank, we

    also report these statistics for selected subsets of observations. As the figures indicate, our

    sample includes 60 schools and 521 total observations. For the sample as a whole, there are

    approximately an equal number of public and private schools, but for schools ranked in the top

    25, almost two thirds are private. The average dean is approximately 53 years old, with slightly

    over 5 years on the job. Deans at top 25 schools in the latter half of the sample exhibit a lower

    average tenure than those in the first half of the sample (4.37 years vs. 5.56 years), suggesting

    that deans may not last as long in office in the more recent time period. Below we will formally

    test for time trends in turnover. Finally, a slight majority of deans are outsiders and a little more

    than three quarters are traditional academics.


    3.2 Characterizing dean departures

    For every dean departure identified by our sample collection procedure, we review all

    articles referring to the departure. In addition, we search news articles and various directories for

    any evidence that the individual resurfaced at a new employer. In all cases we are able to find

    some news article announcing or confirming that an identified departure took place, but there is

    substantial variation in the information content of these articles. To each turnover event, we

    assign a decision date equal to the earliest date that our sources can confirm that the individual

    was leaving or had left. The year in which the decision date falls, measured on a September 1 to

    September 1 basis, is considered the turnover year for that event.
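The September 1 to September 1 convention can be made concrete with a small helper; the function name is our own.

```python
from datetime import date

def turnover_year(decision_date: date) -> int:
    """Map a departure's decision date to its turnover year, where year t
    runs from September 1 of year t to September 1 of year t+1."""
    if decision_date >= date(decision_date.year, 9, 1):
        return decision_date.year
    return decision_date.year - 1

# A decision date of October 15, 1997 falls in the 1997 turnover year
# (September 1997 to September 1998); so does March 1, 1998.
```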

Some basic summary statistics on dean turnover and career lengths are reported in Panel A

    of Table 2. Since we need information on the identity of the individual serving as dean at both

    the start and end of the year to calculate turnover, our sample of possible turnover school-years

    with non-interim deans drops to 471 from the 521 total school-years reported in Table 1. In this

    sample we find 61 dean departures, implying a turnover frequency of 12.95%. To gauge what

    this means in terms of typical career lengths, we treat each dean-school match as a single

    observation and estimate the distribution of the survival time of a dean's career at a school.4 As

    we report in the table, the estimated 25th percentile of a dean's stint at a school is 4 years, the

    50th percentile is 8 years, and the 75th percentile is 13 years. Interestingly, these figures on

    turnover rates and job tenure are fairly similar to what others have reported for CEOs of large

    corporations in the 1990s.

    4 Survival time percentiles are calculated using the Kaplan-Meier product-limit estimate of the survivor function.
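The survival-time percentiles in footnote 4 use the Kaplan-Meier product-limit estimator, which accommodates deans still in office at the end of the sample (censored spells). A minimal sketch, with toy spell data chosen so the quartiles come out at 4, 8, and 13 years, echoing the magnitudes reported in the table (the data are ours, not the paper's sample):

```python
def km_survivor(spells):
    """Kaplan-Meier product-limit estimate of the survivor function.
    `spells` is a list of (duration, observed) pairs, where observed=False
    marks a censored spell (dean still in office at sample end).
    Returns (time, S(time)) pairs at the observed event times."""
    s, curve = 1.0, []
    event_times = sorted({t for t, obs in spells if obs})
    for t in event_times:
        at_risk = sum(1 for d, _ in spells if d >= t)            # n_i
        departures = sum(1 for d, obs in spells if d == t and obs)  # d_i
        s *= 1.0 - departures / at_risk
        curve.append((t, s))
    return curve

def km_percentile(spells, p):
    """Smallest time t with S(t) <= 1 - p (e.g. p=0.5 for the median)."""
    for t, s in km_survivor(spells):
        if s <= 1.0 - p:
            return t
    return None  # percentile not reached because of censoring

# Toy data: four observed departures and two censored spells.
spells = [(2, True), (4, True), (4, False), (8, True), (13, True), (15, False)]
quartiles = [km_percentile(spells, p) for p in (0.25, 0.5, 0.75)]
```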


    Using information on the turnover events, we assign each departure to one of five

    mutually exclusive categories, where the death category is self-explanatory. All departures

    where articles mention that the dean was pressured to leave by faculty or administrators are

    assigned to the forced category. Non-forced departures where an individual leaves and takes a

position outside his/her school within 6 months of departure are assigned to either the "move up" or

"move down" category depending on the relative quality of the new position. Specifically, we

    assign a departure to the move down category for deans who take dean positions at schools that

    are ranked lower than the school they leave. All other external moves are assigned to the move

    up category. This category is composed entirely of cases where an individual assumes the

    deanship of a higher ranked business school, takes a university president position, or is hired as a

    CEO or CFO of a large corporation. The remaining category is the "generic departure" category.

    These are cases where little or no information is reported or where the dean departs for standard

    generic reasons such as "a desire to return to research and teaching." Typically (73% of the

    time), deans in the generic departure category stay on at their current schools as faculty

    members.

    The figures in Panel B of Table 2 report the fraction of departures in each category.

    Similar to CEO turnover, and not surprising given the delicate nature of these decisions, most

    departures (62.30%) are classified as generic departures. Approximately 10% of dean departures

    fall into the forced category.5

    A fair number of departures (25%) are associated with the dean

    going elsewhere, with moves up being more common than moves down (18.03% vs. 6.56%). In

    Panel C of Table 2 we add figures on the identity of a departed dean's replacement. The

    propensity of schools to hire outsiders and non-academics is consistent with the sample-wide

    5 Weisbach (1988) reports that only a handful of CEOs are overtly forced from office. In a more recent sample, Feeand Hadlock (2003a) find that only 15.6% of CEO departures can be categorized as overtly forced.


    averages reported earlier in Table 1. It appears that it is quite common to initially appoint an

    interim dean after a dean departure, with 36.07% of all turnover events following this pattern.

    3.3 Business school ranking data

    The final necessary component for our empirical analysis is information on rankings

    performance. We construct these measures based on the change in a school's ranking, as changes

    have the potential to provide incremental information that may be used in updating beliefs

    concerning a dean's abilities. Since the USN rankings are annual rankings, we calculate USN

    rankings changes on a 1-year and a 2-year basis. The BW rankings are released every other year,

    so we calculate BW rankings changes only for 2-year windows.

    We report summary statistics on rankings changes in Panel A of Table 3. Both rankings

    appear to exhibit similar levels of variation over time. In particular, over 2-year windows both

    rankings series have a median absolute value change of 2.00, indicating that half of the schools

    move up or down by no more than 2 in a 2-year window. Figures for the 25th percentile and

    75th percentile of the distribution of changes confirm this observation. For both USN and BW

    the mean of the absolute value of rankings changes is larger than the median (means of 3.187 and

    2.822 respectively), indicating that there is some skewness in these distributions.
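The dispersion statistics in Panel A reduce to simple summaries of absolute rankings changes; a sketch with illustrative numbers (not the sample data), showing how a few large moves pull the mean above the median:

```python
from statistics import mean, median

def abs_change_summary(changes):
    """Summarize |rank change| for a series of per-school rankings changes."""
    abs_changes = [abs(c) for c in changes]
    return {"mean": mean(abs_changes), "median": median(abs_changes)}

# Illustrative 2-year changes: half the schools move by 2 or less, but one
# large move skews the mean upward, as in the reported distributions.
summary = abs_change_summary([0, -1, 2, -2, 3, -4, 8, 1, -2, 2])
```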

    Since we are interested in the relative information content of these two performance

    related measures, it is informative to reproduce and extend Dichev's (1999) evidence on the

    correlation structure of these series. As we report in Panel B of Table 3, if we match each BW

    ranking with the USN ranking that is published immediately after it (around 5 months later), we

    find a high correlation (.832). Thus, it appears that the level of a school's ranking in the BW and


    USN listing is highly correlated across series. However, when we consider the change in BW

    and USN rankings over two-year periods, the correlation in changes is quite low (.007) and not

    significant (p-value of .949).6

    Thus, using a significantly larger sample than Dichev (1999), we

    are able to confirm his finding of very little correlation in the updating of these series. Turning

    to the autocorrelation structure, we confirm the second main finding of Dichev (1999) that

    changes in rankings exhibit significantly negative first-order autocorrelation. This indicates that

    some part of a school's rankings change in either poll is predictable. Whether market

    participants filter out this predictability in their use of rankings changes is an open question.
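The correlation and first-order autocorrelation calculations above are standard; a self-contained sketch (helper functions are ours):

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def lag1_autocorr(changes):
    """First-order autocorrelation of a series of rankings changes:
    corr(change_t, change_{t-1})."""
    return pearson(changes[1:], changes[:-1])

# A school whose rank repeatedly overshoots and reverts (e.g. -3, +2, -2, ...)
# exhibits the negative first-order autocorrelation Dichev (1999) documents.
```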

    4. Models of Dean Turnover

    4.1 Empirical strategy

    We now turn to examining the role of rankings in dean turnover. Our main dependent

    variable assumes a value of 1 if a dean leaves over the observation period for any reason except

    death or a move up, and 0 if the dean stays. This dependent variable should pick up most cases

    where the dean's departure is precipitated by dissatisfaction with performance. Following the

    existing literature on management turnover, our empirical approach is to estimate logit models

    predicting this dependent variable as a function of measures related to rankings performance and

    other control variables such as age and tenure. In all models we exclude deans who served for

6 We deviate from Dichev's (1999) treatment here in that we match each BW ranking with the subsequent USN ranking. This seems appropriate as both rankings are based on information collected primarily in the spring and summer of the previous year based on the most recent graduating class. If we follow Dichev's (1999) approach of matching each BW ranking to the average of USN rankings that straddle it, our results are very similar.


    less than one year as of the start of the observation period, as any performance changes are

    unlikely to be attributable to them.
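The estimation itself is a standard binary logit. A minimal Newton-Raphson maximum-likelihood sketch on synthetic dean-years (the data-generating coefficients are invented for illustration, not the paper's estimates):

```python
import numpy as np

def fit_logit(X, y, iters=25):
    """Fit P(y=1) = 1/(1+exp(-X @ beta)) by Newton-Raphson.
    X should include a column of ones for the intercept."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)                     # IRLS weights
        hessian = X.T @ (X * W[:, None])
        beta += np.linalg.solve(hessian, X.T @ (y - p))
    return beta

# Synthetic dean-years: turnover is more likely when the rankings change
# (sign convention as in the paper: improvements positive) is negative.
rng = np.random.default_rng(0)
perf = rng.integers(-10, 11, size=500).astype(float)  # rankings changes
p_true = 1.0 / (1.0 + np.exp(-(-1.5 - 0.25 * perf)))  # invented coefficients
y = (rng.random(500) < p_true).astype(float)
X = np.column_stack([np.ones(500), perf])
beta_hat = fit_logit(X, y)  # slope estimate beta_hat[1] should be negative
```

Control variables such as age and tenure would enter as additional columns of X.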

    Some subtle timing issues arise in our construction of performance measures. Because

    the natural way to organize data on dean turnover is based on the academic calendar, our

    dependent variable is coded using a September 1 to September 1 window, where we consider a

    turnover to occur on the event's decision date. Since the USN and BW rankings data are released

    at different times and with different frequencies, we have to tailor our empirical modeling to

    each ranking.

    4.2 The USN rankings

    The USN rankings are annual rankings that are released in March. Thus, one natural

    performance metric, which we call USN lagged performance, is the change in a school's USN

    rank in the March to March period that completely precedes the September to September

    observation year (e.g., rankings changes from March 1996 to March 1997 predicting turnover

    from September 1997 to September 1998). An alternative metric, which we call USN

    contemporaneous performance, is the change in a school's USN rank in the March to March

    period that ends during the September to September observation period (e.g., rankings changes

    from March 1997 to March 1998 predicting turnover from September 1997 to September 1998).

    We have no strong feel as to which performance measurement period is more appropriate. This

    depends on how quickly schools react to rankings changes and whether a school can predict its

    ranking before the actual ranking release date using internal information such as student quality

    and placement. A third performance metric, which is closely related to the other two, is the


    change in a school's USN ranking between the March ranking that falls during the observation

    year and the March ranking two years prior (e.g., rankings changes from March 1996 to March

    1998 predicting turnover from September 1997 to September 1998). We refer to this as the USN

    2-year performance measure.
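The three USN measurement windows can be written down explicitly. Given a turnover observation year t (September of year t to September of year t+1), each metric compares two March rankings; the helper below returns the ranking years, with names of our choosing:

```python
def usn_windows(t):
    """For the turnover year running September t to September t+1, return the
    March-to-March ranking years compared by each USN performance metric."""
    return {
        "lagged": (t - 1, t),           # fully precedes the observation year
        "contemporaneous": (t, t + 1),  # ends during the observation year
        "two_year": (t - 1, t + 1),     # spans both of the above
    }

# Turnover measured September 1997 to September 1998 (t = 1997), matching
# the examples in the text.
windows = usn_windows(1997)
```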

    We report in columns 1 to 3 of Table 4 estimates from logit models in which each

    column includes one of these USN based performance metrics. Our sign convention is such that

    increases in rankings are coded as positive performance numbers and decreases are coded as

    negative numbers. While the estimates on the performance variables in these three columns have

    the expected negative sign, none are significant at conventional levels. Moreover, the implied

    probabilities derived from these estimates and reported at the bottom of the table suggest a very

    small change in the probability of turnover for large changes in a school's USN ranking. Thus,

    there is little evidence of a simple relation between USN rankings changes and dean departures.

    It is also interesting to note that the year variable is in all cases negative and insignificant, thus

    lending little support to the hypothesis that dean turnover rates have increased over time.

    4.3 The BW rankings

The BW rankings are biennial rankings released in October of every even year (i.e.,

    1990, 1992, and so on). Given the timing of these rankings, our natural window for predicting

    turnover when using these rankings is a 2-year period. There are again some subtle timing issues

    with regards to measuring a program's performance.

    One possibility is to use the change in the BW ranking in the October to October two-

    year period that ends just after the start of the 2-year turnover observation period (e.g., rankings


    changes from October 1996 to October 1998 predicting turnover from September 1998 to

    September 2000). We will refer to this variable as Lagged BW performance. If most turnover

    occurs in response to an actual rankings downgrade and if it takes up to 2 years for deans to

    depart after the downgrade, this measure would be the most appropriate. An alternative

    possibility is to use the change in the BW ranking over the 2-year October to October period that

    approximately overlaps the 2-year turnover observation period (e.g., rankings changes from

    October 1998 to October 2000 predicting turnover from September 1998 to September 2000).

    This performance measure, which we call Contemporaneous BW performance, would not be

appropriate if deans depart only in response to a rankings change, since the public release of the

    ranking change actually comes after the end of the turnover period. However, this measure is

    appropriate if deans depart not in response to an actual rankings change, but rather in response to

    the underlying forces that cause a rankings change or in anticipation of a rankings change. For

    example, if a school knows that it has unhappy students or a deteriorating reputation amongst

    recruiters, the dean may depart either because the school anticipates a rankings drop or because

    the school is evaluating the dean using factors that happen to be reflected in the rankings.

    A compromise variable, which we refer to simply as BW 2-year performance, uses the

    change in ranking between October of year t-1 and October of year t+1 to predict turnover

    between September of year t and September of year t+2 (e.g., rankings changes from October

    1998 to October 2000 predicting turnover from September 1999 to September 2001). This

    measure will match rankings downgrades with turnover events that happen in the period shortly

    (approximately 1 year) before the downgrade and shortly (approximately 1 year) after the

    downgrade. If dean departures are driven by a combination of leaving in response to or in

    anticipation of an actual rankings downgrade, or if they are driven by information that a school


    receives in the window around a rankings change, then this variable will be the most appropriate

    measure. Since it seems reasonable to us that some combination of these possibilities will

    govern dean turnover behavior, we will use this BW 2-year performance measure as our baseline

    BW performance measure.
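Because the three timing conventions are easy to confuse, the window alignment implied by the examples above can be sketched as follows (a minimal sketch; the function name and the year arithmetic are ours, inferred from the examples in the text):

```python
def bw_windows(turnover_start_year):
    """Return the October-to-October BW rankings window (start year, end year)
    paired with a 2-year turnover window beginning in September of
    `turnover_start_year`, for each candidate measure described in the text."""
    t = turnover_start_year
    return {
        # rankings window ends just after the turnover window starts
        "lagged": (t - 2, t),           # e.g., Oct 1996-Oct 1998 for Sep 1998-Sep 2000
        # rankings window approximately overlaps the turnover window
        "contemporaneous": (t, t + 2),  # e.g., Oct 1998-Oct 2000 for Sep 1998-Sep 2000
        # compromise: rankings window straddles the start of the turnover window
        "bw_2_year": (t - 1, t + 1),    # e.g., Oct 1998-Oct 2000 for Sep 1999-Sep 2001
    }

print(bw_windows(1998)["lagged"])       # (1996, 1998)
print(bw_windows(1999)["bw_2_year"])    # (1998, 2000)
```

Since BW releases rankings only in even-numbered Octobers, the lagged and contemporaneous measures pair naturally with turnover windows starting in even years, while the compromise measure pairs with windows starting in odd years.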

    In column 4 of Table 4 we present logit estimates using this performance metric. The

estimated coefficient on the performance variable is negative and significant at the 5% level (t=-2.13).

The implied probabilities from the estimation indicate that the role of performance in

    turnover is also economically significant. If a dean goes from the 75th percentile of performance

    (an increase in ranking of 2) to the 25th percentile of performance (a drop in ranking of 2), the

    model implies that the probability of his/her departure over the 2-year turnover period increases

    from 15.5% to 31.7%. The magnitudes are more dramatic for larger ranking changes. This

    evidence indicates that the BW ranking itself, or factors that are reflected in the BW ranking, are

    strongly related to the dean turnover decision.
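The implied probabilities follow mechanically from the logit functional form. As a check, here is a minimal sketch that backs the baseline log-odds out of the reported median probability (22.6%) and applies the column 4 coefficient of -0.232; holding the other covariates fixed at this baseline is our simplifying assumption:

```python
import math

def logistic(log_odds):
    """Convert log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-log_odds))

b_bw = -0.232                          # column 4 coefficient on BW 2-year performance
base = math.log(0.226 / (1 - 0.226))   # log-odds at median performance (implied prob. 22.6%)

p_75 = logistic(base + b_bw * 2)       # 75th percentile: ranking improves by 2
p_25 = logistic(base + b_bw * (-2))    # 25th percentile: ranking drops by 2
print(round(p_75, 3), round(p_25, 3))  # 0.155 0.317, matching the text
```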

    In results that we omit for brevity, we experiment with using the other possible BW

    performance variables in place of BW 2-year performance in the specification of column 4 of

    Table 4. When we use the Lagged BW performance variable, the coefficient is negative but not

    significant (t=-1.01). When we use the Contemporaneous BW performance variable, the

    coefficient is negative but very small in magnitude and not even close to significant (t=-0.40). A

    reasonable interpretation of this evidence is that most of the dean turnover that is related to

    rankings changes occurs in the 2-year window centered around the release of the information that

    the ranking has changed. This suggests that schools are reacting somewhat to the release of the

    ranking itself, and somewhat to either the anticipation of a ranking drop or a change in school

    characteristics that ultimately are reflected in their ranking.


    We report in column 5 of Table 4 estimates from a regression including both USN and

    BW performance measures. For purposes of comparison, we measure both turnover and

    performance on a 2-year basis. The results here are similar to our earlier findings. The

    coefficient on the BW measure remains negative and significant, while the coefficient on the

USN ranking measure is small and insignificant (t=-.052). The difference between the estimated

coefficients on these two rankings variables is significant at the 15% level (p-value = .146).

    Taken as a whole, the evidence seems fairly convincing that the BW rankings are related to dean

    departures and the USN rankings are not.

    4.4 Robustness and extensions

    To explore the robustness of our findings, we consider several modifications to our basic

    empirical framework. First, because of concerns about outliers, we experiment with using

    coarser measures of performance that assume a value of 1 when a school's ranking falls over a

    performance measurement period and a value of 0 if the ranking increases or remains constant

    over the period. The results using this type of variable in place of the corresponding

    performance variables in the specifications of Table 4 are very similar to what we report above.

    In particular, we find no evidence that USN rankings changes are related to turnover, while the

    relation between BW rankings changes measured in this up/down manner and turnover is

    significant at the 5% level.

    Since we document earlier that rankings changes are somewhat predictable because of

    first-order autocorrelation, we also experiment with using the unexpected component of rankings

    changes instead of the raw rankings change. Specifically, for each performance variable used in


    Table 4, we regress the variable against its lagged value and use the residuals from the regression

    as our measure of performance. In general, turnover/performance sensitivities tend to be slightly

    stronger using the residual measures. Specifically, in the specification corresponding to column

4, the coefficient on BW performance is -0.381 (as opposed to -0.232) and is significant at the

    1% confidence level. The coefficient on USN performance remains insignificant in the

    specification corresponding to column 2, but is significant at the 10% (5%) level in the

    specification corresponding to column 1 (3). Thus it appears that turnover does respond to new

    information reflected in rankings, rather than just the predictable component of changes. Our

    finding of greater turnover/performance sensitivity for BW rankings than for USN rankings is

    unaltered by the alternate performance definitions. When we include both USN and BW residual

    performance in a specification analogous to column 5, we again find a significant (1% level)

    coefficient on BW performance and an insignificant coefficient on USN performance.
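The residual construction described above is a standard prewhitening step: regress each rankings change on its lagged value and keep the residuals. A minimal sketch with made-up data (the paper's own series are not reproduced here):

```python
import numpy as np

def residual_performance(changes):
    """Regress rankings changes on their lagged values and return the
    residuals, i.e., the unexpected component of each change."""
    y = np.asarray(changes[1:], dtype=float)    # current change
    x = np.asarray(changes[:-1], dtype=float)   # lagged change
    X = np.column_stack([np.ones_like(x), x])   # intercept + lagged change
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta                         # residuals = unexpected component

# Hypothetical changes exhibiting the negative autocorrelation documented in Table 3:
resid = residual_performance([3, -2, 1, -1, 2, -3, 1])
print(resid.round(2))  # residuals average to zero by construction
```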

    As a final alternative performance metric, we consider the possibility that dean turnover

    is related to major gifts, as fundraising is often one of a dean's major responsibilities. Our source

    of information here is the list of business school naming gifts reported by Burch and Nanda

(2003) as well as AACSB International's list of the largest business school donations from a

single source.7 We use this information to create variables measuring total gifts received and

    total non-naming gifts received by a school over the performance measurement period

    corresponding to our rankings variables. When we include each of these variables, in turn, in the

    specifications of Table 4, the estimated coefficients on these variables are in all cases

    insignificant and inclusion of these variables has no appreciable effect on any of the performance

    coefficient estimates. We should caution that the data on gifts is quite sparse and only includes

7 This list, which we downloaded from http://www.aacsb.edu/members/communities/interestgrps/donors.asp, provides information on 97 gifts ranging in size from $4 million to $62 million.


    large gifts, so it could be that we do not have sufficient power to detect the role of fundraising on

    dean turnover.

    In addition to these robustness checks, we also examine whether there is significant

    variation in the dean-turnover mechanism across schools or types of deans. In particular, we

    experiment with adding dummy variables and dummy variables interacted with the performance

    metric to the specifications of Table 4. The dummy variables we employ here, one at a time, are

    based on (a) the school's status as a private or public school, (b) whether the school was ranked

    in the top 25, (c) whether the dean was an insider or an outsider, and (d) whether the dean was an

academic or non-academic. In all cases the estimated coefficients on the dummy variable and the

dummy variable interacted with performance are insignificant. Thus, we find little evidence of

    substantial heterogeneity in the turnover mechanism by school or dean characteristics.

    Finally, we consider some alternative treatments of the dependent variable. First,

    because we are concerned that some dean departures are natural retirements, we re-run the

    regressions in Table 4 for all deans under the age of 63. The results here are qualitatively very

similar to what we report in the table. Second, because we suspect that rankings increases may

    cause deans to move up in the dean labor market, we create a dependent variable that assumes a

    value of 1 for a "move up" departure and a 0 for all other observations. When we use this

    dependent variable in the specifications of Table 4, all of the estimates on the explanatory

    variables are insignificant. It may be that these variables play no role in upward moves, but

    given the small number of these types of job movements, our power here may be quite limited.

    4.5 Rankings components


    Our results to this point indicate that the raw USN ranking is not significantly related to

    dean turnover while the BW ranking appears to be significantly related to dean turnover. To

    further understand what information is used in evaluating deans, we consider the role of the

    major components of the rankings in dean turnover. The BW ranking has two major

    components, a graduate poll and a corporate poll.8

    We experiment with including variables

    based on a school's change in these components in the specification of column 4 of Table 4 in

    place of the overall BW performance variable. The estimates from these models reveal no

    significant difference between these two components when they are both included, and neither

    has a significant coefficient when included individually. Thus, it appears that the BW ranking

    aggregates several pieces of information that are collectively used in evaluating deans.

    The USN poll has consistently included four major components: placement, selectivity,

    corporate reputation, and academic reputation. We experiment with using variables based on a

    school's change in each of these ranking components, included one at a time, in specifications

    (1)-(3) of Table 4 in place of the overall USN rank. In no cases are the coefficients on

    selectivity, corporate reputation, or academic reputation significant at the 5% confidence level

    (the coefficient on corporate reputation was significant at the 10% level in one specification but

    of small magnitude). The one component that is ever significant at the 5% confidence level is

    the change in a school's placement score, and we report results for this variable in Table 5.

    When we include the lagged version of this variable (see column 1), the coefficient is negative

    but not significant at conventional levels (t=-1.28). However, in column 2 the contemporaneous

    version of this variable displays a negative coefficient that is significant at the 10% level

    (t=-1.77). As we report in column 3, the estimated coefficient on the 2-year version of this

8 In the more recent polls BW has also placed a small weight on a third component, intellectual capital. For more details on the rankings, see the appendix.


    placement variable is negative and highly significant (t=-2.40). The estimates in column 3 imply

    that a dean at a school that displays a strong increase in placement success (75th percentile) has

    only a 7.5% chance of departing over the course of the year. This almost doubles to 13.4% for a

    dean at a school with poor performance (25th percentile) on this dimension. Thus, while the

    overall USN ranking is not strongly associated with dean turnover, there does appear to be some

    relevant information in a school's placement success as reported by USN in their rankings.

    4.6 What happens after dean departures?

    The fact that dean departure rates are elevated around poor performance in the BW

    rankings and the USN placement rankings suggests that schools are behaving as if deans can

    actually affect rankings or rankings related measures. If this is the case, we might expect to

    observe large changes in a school's ranking or in other program characteristics after a new dean

    is put in charge.9

    To examine this possibility, we collect data on what happens to a school's

    rankings, enrollment, and tuition in the period after a new permanent dean starts leading a

    school. We calculate these changes over the two-year period starting with the first BW or USN

rankings release after the new dean's appointment.

    We report in Table 6 statistics on median changes in the above measures and, as a

    measure of variance, the median absolute values of these changes. We report these statistics

    separately for schools with a new dean appointed immediately prior to the observation and for

    schools with no new dean appointment. As we report in the table, there is little evidence of an

9 Along similar lines, Denis and Denis (1995) find evidence of significant changes in firm performance after top management dismissals. See also Bertrand and Schoar (2003) on how the identity of a manager can have a significant effect on firm characteristics.


    abnormal increase or decrease in rankings in the two-year period following the appointment of a

new dean. For schools with new deans, the median change in the USN (BW) ranking is 0.25 (0.50).

These figures do not differ significantly from the median ranking change of 0 in both

    polls for schools without a new dean. The figures in Table 6 also reveal no significant tendency

    towards an abnormal increase or decrease in placement success, enrollment, or tuition following

    the appointment of a new dean.

    When we examine the median absolute value of changes in these variables, we find little

    evidence of large increases in the variance of subsequent program changes following a new dean

    appointment. In particular, only one measure (the median change in the absolute value of a

    school's USN ranking) is significantly elevated around dean replacements, and this difference is

significant at only the 10% level. Taken as a whole, the evidence of major changes in rankings or

    program characteristics around dean changes is weak. However, given the long windows and

    limited data, our power to detect such effects may be low.

    5. Conclusion

    Using a comprehensive sample of business schools with ranked MBA programs from

1990-2002, we find that departures of deans are related to some rankings-related metrics and not

    to others. We find little evidence that dean departures are related to the change in a school's

overall rank in the U.S. News & World Report (USN) rankings. However, dean turnover does

appear to be elevated when a school drops in the Business Week (BW) rankings and when a

    school deteriorates on the placement dimension as measured by USN. These results are


    significant in both a statistical and economic sense. We find little evidence of systematic

    changes in rankings or program characteristics after a dean is replaced.

    The timing of dean departures around rankings releases indicates that these departures are

    concentrated in the period shortly before and shortly after the release of a rankings downgrade.

    This suggests that schools are reacting both to rankings downgrades themselves and to the

    anticipation of a downgrade. Alternatively, it may be that schools are not responding to rankings

    or anticipated rankings, but rather to information that is reflected in the rankings. In either case,

    it appears that market participants are behaving as if rankings are either informative or important

    when it comes to measuring a dean's success.

    From the perspective of deans, our results indicate that a strong focus on rankings or

rankings-related metrics by these individuals is a rational response to some sort of evaluation

    scheme. Poor performance in the rankings tends to lead to a dean stepping down from his/her

    position for reasons other than taking a more attractive position elsewhere. Some deans probably

leave voluntarily (perhaps with schools that have fallen in the rankings doing little to dissuade

    them), while others are likely to be either gently nudged out or more forcefully pressured to

leave. Whatever the scenario, our results indicate that deans and schools use rankings-related

    information to evaluate the individual's ability and/or the quality of the match between the

    individual and the school. An increase in rankings will reflect well on the dean and a decrease

    will do the opposite.

    Aside from the issue of a dean's incentives, our results are relevant to some other issues

    concerning how business schools function. First, our findings suggest that there is an active

    governance system in business schools where deans are evaluated based on performance.

    Second, our results tell us something about business school objectives. In particular, our results


    suggest that schools care more about their BW ranking than their USN ranking and further

    suggest that schools emphasize placement success. The emphasis put on the BW ranking may

    reflect the fact that many consider this to be the most prestigious ranking. Alternatively,

    emphasis on the BW ranking may indicate that schools care more about qualitative measures of

student and recruiter perceptions that are weighted heavily by BW. The fact that schools care

    about rankings in general and this type of ranking in particular suggests that deans and schools

    will tend to try to improve on dimensions that will earn them higher BW rankings. Whether this

    is a good or bad thing depends on one's perspective, but certainly catering to the rankings is not

    without costs, as Zimmerman (2001) emphasizes.


    Appendix

In this appendix we provide a brief description of the construction of the BW and USN rankings and how they have changed over time. We also discuss our construction of the placement success variable. Before providing these details, we first note that when a group of schools is tied for a certain rank, we assign all schools in the group a rank equal to the midpoint of the rank of the schools immediately above and immediately below the group. Thus, if two schools were tied with a rank of 10, we would assign them both a rank of 10.5.

The first BW ranking was published in November of 1988 and subsequent rankings have been released every even year in the month of October. BW ranked 20 schools from 1988-1994, 25 schools from 1996-1998, and 30 schools from 2000-2002. BW also publishes a list of "runner-up" schools. We ignore this list, as it does not rank schools within the runner-up category. Before 2000, the BW ranking was based entirely on a corporate poll (poll of recruiters) and a graduate poll (poll of recent graduates). The graduate poll rank from 1992 onwards is a weighted average of the most recent poll and the two preceding polls. It is not clear exactly how BW weights the corporate poll rank and the graduate poll rank to construct an overall rank, but it appears that they weight these approximately equally. Starting in 2000, BW explicitly weights the two polls 45% each, with the remaining 10% being based on an intellectual capital score that is based on scholarly output.
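The tie-handling convention above amounts to giving tied schools the average of the positions they jointly occupy. A minimal sketch (the function name is ours):

```python
from collections import Counter

def tie_adjusted_ranks(reported_ranks):
    """k schools tied at reported rank r occupy positions r through r+k-1,
    so each receives the midpoint position r + (k - 1) / 2."""
    counts = Counter(reported_ranks)
    return [r + (counts[r] - 1) / 2 for r in reported_ranks]

print(tie_adjusted_ranks([9, 10, 10, 12]))  # [9.0, 10.5, 10.5, 12.0]
```

This reproduces the example in the text: two schools tied at rank 10 each receive a rank of 10.5, the midpoint of the schools ranked immediately above and below.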

The first USN ranking was published in 1987. Regular annual rankings began in 1990 and have been published every year since in the months of either March or April. USN ranked 25 schools in 1990, 1992, and 1993, 30 schools in 1991, and 50 schools from 1994 onwards. A school's USN rank is based on placement, selectivity, reputation by recruiters/executives, and reputation by academics. For a brief period (1990-1994) the USN rank also placed a small weight (5%) on a school's graduation rate. Each of the four major components of the USN rank has evolved slightly over time. The placement rank is based primarily on objective statistics concerning whether graduates obtain jobs and the salaries they obtain. The selectivity rank is based on objective statistics such as GMAT scores, undergraduate grades, etc. The recruiter reputation rank is based on a poll of executives or recruiters who hire MBAs. The academic rank is based on surveys of business school deans and MBA program directors. The weightings on the four major components have changed over time, with the placement rank and selectivity rank always being weighted 25% or more, and the other two components being weighted 25% or less.

To construct our measure of USN placement success, we began with each school's USN rank on the placement dimension. From 1999 to 2002 USN did not report the placement rank, but they did report data on the components of the placement rank and the weights they assign to each of these components in calculating a placement score. Using this data and the reported weights, we are able to construct a school's USN placement rank within the top 50 for these years. Given the way we create these variables, no school could have a rank worse than 50 on the placement dimension after 1998. In earlier years, some schools with an overall USN rank in the top 25 or top 50 have USN placement ranks that are lower than 25 or 50. To make the figures from the earlier and later years comparable, we assign each USN ranked school a placement score equal to its relative rank on the placement dimension within the set of ranked schools. Thus, if a school with a placement rank of 82 has the 49th best placement score within the top 50 schools, we would assign the school a placement score of 49. Our measures of placement performance are then based on the change in a school's placement score over the relevant time window. We set placement performance equal to missing for all measurement periods that begin in the last year that U.S. News reported placement scores for only 25 schools, as the change in coverage from 25 schools to 50 schools causes placement data from the two regimes to be non-comparable. Inclusion of these observations has no material effect on our reported results.
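The relative-rank construction for placement scores can be sketched as follows (hypothetical raw ranks; ties, which the paper would handle with the midpoint rule, are ignored here for brevity):

```python
def placement_scores(placement_ranks):
    """Assign each ranked school a placement score equal to its relative rank
    on the placement dimension within the set of ranked schools (1 = best)."""
    order = sorted(range(len(placement_ranks)), key=lambda i: placement_ranks[i])
    scores = [0] * len(placement_ranks)
    for relative_rank, i in enumerate(order, start=1):
        scores[i] = relative_rank
    return scores

# A school with raw placement rank 82 is second worst among these five ranked
# schools, so it receives a placement score of 4:
print(placement_scores([3, 12, 82, 95, 40]))  # [1, 2, 4, 5, 3]
```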


    References

    AACSB Membership Directory, various years, AACSB International, St. Louis.

    Bertrand, Marianne and Antoinette Schoar, 2003, "Managing with Style: The Effect of Managers

    on Firm Policies," Quarterly Journal of Economics 118-4.

    Brickley, James and Jerold Zimmerman, 2001, "Changing Incentives in a MultitaskEnvironment: Evidence from a Top-Tier Business School,"Journal of Corporate Finance 7,367-396.

    Burch, Timothy R. and Vikram Nanda, 2003, "What's in a Name? Hotelling's ValuationPrincipal and Business School Namings,"Journal of Business (forthcoming).

    Business Weeks Guide To The Best Business Schools, various years, McGraw-Hill, New York.

    Coughlan, Anne T. and Ronald M. Schmidt, 1985, "Executive compensation, managementturnover, and firm performance: an empirical investigation,"Journal of Accounting andEconomics 7, 43-66.

    Denis, David and Diane Denis, 1995, "Performance changes following top managementdismissals,"Journal of Finance 50, 1029-57.

    Dichev, Ilia, 1999, "How Good Are Business School Rankings?"Journal of Business 72-2, 201-213.

    Fee, C. Edward and Charles J Hadlock, 2003a, "Management Turnover across the CorporateHierarchy,"Journal of Accounting and Economics (forthcoming).

    Fee, C. Edward and Charles J. Hadlock, 2003b, "Raids, Rewards and Reputations in the Marketfor Managerial Talent," Review of Financial Studies 16-4, 1311-1353.

    Hasselback, James R., various years,Prentice Hall Guide To Finance Faculty, Prentice Hall,New Jersey.

    Jovanovic, Boyan, 1979, "Job matching and the theory of turnover,"Journal of PoliticalEconomy 87-5, 972-990.

    Petersons Annual Guide to Graduate Study, various years, Petersons, New Jersey.

    Tracy, Joseph and Joel Waldfogel, 1997, "The Best Business Schools: A Market-BasedApproach,"Journal of Business 70-1, 1-31.

    U.S. News and World Report:Americas Best Graduate Schools, various years, WashingtonD.C.: U.S. News and World Report.


Warner, Jerold, Ross L. Watts, and Karen H. Wruck, 1988, "Stock prices and top management changes," Journal of Financial Economics 20, 461-92.

Weisbach, Michael S., 1988, "Outside directors and CEO turnover," Journal of Financial Economics 20, 431-60.

Zimmerman, Jerold, 2001, "Can American Business Schools Survive?" Working paper, University of Rochester.


    Table 1

    Summary Statistics on Sample of Deans from Schools with Ranked MBA Programs

All Schools   Top 25 Schools   Top 25 Schools, 1990-1995   Top 25 Schools, 1996-2002

    Number of schools 60 36 32 33

    Number of dean-school matches 121 76 44 54

    Number of dean-school-year observations 521 313 141 172

    Percentage of schools that are public 45.49% 34.19% 34.75% 33.72%

    Percentage of schools that are private 54.51% 65.81% 65.25% 66.28%

    Mean dean age (in years) 52.887 53.195 52.879 53.453

    Mean dean tenure (in years) 5.092 4.907 5.560 4.372

    Percentage of deans who are insiders 38.15% 43.27% 47.52% 39.77%

    Percentage of deans who are outsiders 61.85% 56.73% 52.48% 60.23%

Percentage of deans who are academics 83.04% 77.24% 77.30% 77.19%

Percentage of deans who are non-academics 16.96% 22.76% 22.70% 22.81%

Note: The table includes data on all deans who served at business schools with MBA programs ranked by U.S. News and World Report (USN) or Business Week (BW) in their annual ranking of programs from 1990 to 2002. For each school that is ranked in a given year, we report data for the dean who is leading that school in the September of the calendar year of the ranking. We exclude from the figures all observations where the dean was serving on an interim basis (totaling 40 dean-school-year observations). Data on deans and schools is collected from a variety of school directories/guides and by Factiva news searches. All dean characteristics are calculated over the entire set of dean-school-year observations. Age is inferred from the year an individual received their highest degree, where we assume an individual was 21 when they earned a bachelor's degree, 26 when they earned a master's degree, and 27 when they earned a doctorate. Tenure is the number of years the individual had served as dean as of the observation date. Insiders are all individuals who were employed by the school immediately before becoming dean, with all other deans being considered outsiders. A dean's academic status is determined by examining whether he/she was employed by an academic institution immediately prior to becoming dean. A school's Top 25 status is determined by their USN ranking.


    Table 2

    Turnover Statistics for Business School Deans

All Schools   Top 25 Schools   Top 25, 1990-1995   Top 25, 1996-2001

Panel A: Frequency of turnover
Number of observations 471 287 141 146
Number of dean departures 61 42 20 22
Annual turnover rate 12.95% 14.63% 14.18% 15.07%
Implied length of career - 25th percentile 4 4 4 4
Implied length of career - 50th percentile 8 8 8 8
Implied length of career - 75th percentile 13 11 11 11

    Panel B: Classification of turnover

Generic departure 62.30% 64.29% 55.00% 72.73%
Forced 9.84% 9.52% 20.00% 0.00%
Death 3.28% 0.00% 0.00% 0.00%
Moves up 18.03% 21.43% 20.00% 22.73%

    Moves down 6.56% 4.76% 5.00% 4.55%

Panel C: Background on replacement
Inside replacement 36.67% 38.10% 30.00% 45.45%
Outside replacement 63.33% 61.90% 70.00% 54.55%
Academic replacement 83.33% 76.19% 65.00% 86.36%
Non-academic replacement 16.67% 23.81% 35.00% 13.64%
Initially replaced by interim dean 36.07% 30.95% 35.00% 27.27%

Note: The entire sample consists of all dean-school years for non-interim deans serving at schools ranked by USN or BW in that calendar year. The table reports figures on whether a dean departs from a school over the year measured on a September 1st to September 1st basis. Schools that are ranked 25 or better at the start of a given year are assigned to the Top 25 group. The 1990-1995 (1996-2001) column reports turnover behavior for observation-years that begin in calendar 1990 through 1995 (1996 through 2001). The implied career length data is calculated by treating each dean-school match as a single observation and estimating survival times in a hazard model that accounts for right censoring arising from the fact that for many deans we do not observe their completed tenure. For each dean departure we collect information from Factiva news searches and assign the departure to one of five mutually exclusive categories. Turnovers were classified as forced if there was evidence that the dean was pressured to leave because of disputes with university administrators or faculty. Non-forced departures where a dean takes a dean position at a school that is ranked lower than the school they leave are assigned to the "move down" category. All other external moves are assigned to the "move up" category. This category is composed entirely of cases where an individual assumed the deanship of a higher ranked business school, took a university president position, or was hired as a CEO or CFO of a large corporation. All departures that are not in these categories and were not deaths are assigned to the generic departure category. These departures are typically described as motivated by a desire to return to teaching or research or to pursue other interests. For cases in which an interim dean is appointed, the figures for replaced by an insider/outsider and academic/non-academic refer to the eventual permanent dean replacement.


    Table 3

    Summary Statistics on Changes in Business School Rankings

Panel A: Magnitudes of ranking changes
                                                      Mean/Level  Median  Number of Observations
Absolute value of 1-year change in USN ranking          2.908     2.000     476
Absolute value of 2-year change in USN ranking          3.187     2.000     419
25th percentile of 2-year change in USN ranking                  -2.00      419
75th percentile of 2-year change in USN ranking                   2.00      419
Absolute value of 2-year change in BW ranking           2.822     2.000     129
25th percentile of 2-year change in BW ranking                   -2.00      129
75th percentile of 2-year change in BW ranking                    2.00      129

Panel B: Correlations and autocorrelations in rankings
                                                      Correlation/Autocorrelation  p-value  Number of Observations
Correlation of levels of USN and BW rankings                  .832     .000     167
Correlation of changes in 2-year USN and BW rankings          .007     .949     100
First-order autocorrelation of changes in 2-year USN ranking -.200     .008     178
First-order autocorrelation of changes in 2-year BW rankings -.380     .000     100

Note: All ranking data is collected from the 1990-2002 editions of U.S. News and World Report (USN) and Business Week (BW). The USN rankings are released annually in March/April, while the BW rankings are released bi-annually in October. The 1-year USN change is calculated as the change in rank between two successive annual editions. The 2-year USN change is calculated as the change in rank between the year t edition and the year t+2 edition. The 2-year BW change is the change in rank between two successive biannual BW rankings releases. In correlating the two sources of ranking data, we match the October of year t BW ranking for a given school with the March of year t+1 USN ranking of that school. In calculating autocorrelations for the 2-year USN data, we only use rankings releases from even years (i.e., 1990, 1992, 1994, etc.). For cases where a group of schools is tied for a certain rank, we assign them all a rank equal to the midpoint of the rank of the schools immediately above and immediately below the group. Thus, if two schools were tied with a rank of 10, we would assign them both a rank of 10.5.


    Table 4

    Logit Models of Business School Dean Turnover and Business School Rankings

                                  (1)        (2)        (3)        (4)        (5)
    USN Lagged Performance      -0.055
                                (0.043)
    USN Contemporaneous Perf.             -0.029
                                          (0.039)
    USN 2-year Performance                           -0.057                 -0.006
                                                     (0.037)               (0.115)
    BW 2-year Performance                                      -0.232**   -0.233**
                                                               (0.109)    (0.109)
    Dean's tenure                0.036     0.055*     0.050      0.086      0.086
                                (0.035)   (0.033)   (0.036)    (0.065)    (0.066)
    Dean's age                   0.079**   0.070**    0.066*    -0.006     -0.006
                                (0.038)   (0.036)   (0.038)    (0.071)    (0.072)
    Year                        -0.021    -0.006    -0.029     -0.113     -0.112
                                (0.060)   (0.053)   (0.061)    (0.103)    (0.103)
    Constant                    35.821     5.608    51.629    223.382    222.676
                               (120.16)  (106.29)  (121.04)   (204.81)   (205.199)
    Number of observations         337       376       325         73         73
    Pseudo R2                    0.051     0.055     0.056      0.119      0.119
    Log Likelihood             -114.57   -126.35   -110.63     -36.88     -36.88

    Implied probability of turnover at varying performance levels
                                                                     USN      BW
    10th percentile              0.128     0.115     0.133   0.541   0.230   0.541
    25th percentile              0.113     0.108     0.115   0.317   0.226   0.317
    50th percentile              0.103     0.102     0.104   0.226   0.224   0.226
    75th percentile              0.093     0.097     0.094   0.155   0.222   0.155
    90th percentile              0.082     0.091     0.082   0.084   0.219   0.084

    Note: All estimates are logit estimates with asymptotic standard errors reported in parentheses under the coefficient estimates. In specifications (1), (2), and (3) turnover is measured over a 1-year window commencing on September 1st of every year. In specifications (4) and (5) turnover is measured over a 2-year window commencing on September 1st of every odd year. In all specifications the dependent variable assumes a value of 1 for all observations where a dean departs for reasons other than death or "moves up" and 0 when the dean does not turn over. The dependent variable for deaths and moves up is assigned a missing value. The USN Lagged performance measure is calculated as the change in the USN ranking in the 1-year period ending in the March issue immediately prior to the start of the observation year. The USN Contemporaneous performance measure is the change in the USN ranking in the 1-year period ending in the March that falls within the observation year. In column (3) the USN 2-year performance measure is the change in the USN ranking in the 2-year period ending in the March that falls within the observation year. In columns (4) and (5), where we consider turnover between September of year t and year t+2, we calculate the BW (USN) performance measure as the change in the school's BW (USN) rank between October (March) of year t-1 and October (March) of year t+1. Ranking ties are treated as described in Table 3 and all other variables are defined as in the earlier tables. The implied probabilities are calculated holding all other variables at their means and varying the level of the performance metric used in each respective column. All estimated models exclude deans who served for less than one year as of the start of the turnover observation period. We measure rankings performance as the ranking at the start of the period minus the ranking at the end of the period; thus, improvements in rankings are indicated by a positive level of performance.
    *Significant at the 10% level, **Significant at the 5% level, ***Significant at the 1% level
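The implied probabilities in the bottom panel of Table 4 can be reproduced mechanically from any fitted logit: hold the other covariates at their sample means, vary the performance measure, and apply the logistic function. A sketch with hypothetical numbers (only the BW slope of -0.232 is taken from column (4); the constant and the means below are placeholders, not the paper's estimates):

```python
import math

def implied_turnover_prob(coefs, means, perf_name, perf_value):
    """Logit-implied turnover probability with every covariate held at its
    sample mean except the performance measure of interest."""
    x = dict(means)
    x[perf_name] = perf_value
    z = coefs["const"] + sum(coefs[name] * value for name, value in x.items())
    return 1 / (1 + math.exp(-z))

# Hypothetical calibration for illustration; only the BW performance
# slope matches column (4) of Table 4.
coefs = {"const": -1.5, "bw_perf": -0.232, "tenure": 0.086, "age": -0.006}
means = {"bw_perf": 0.0, "tenure": 5.0, "age": 55.0}

# Performance = start rank minus end rank, so a drop of 5 places is -5.
p_drop = implied_turnover_prob(coefs, means, "bw_perf", -5)
p_rise = implied_turnover_prob(coefs, means, "bw_perf", +5)
print(p_drop > p_rise)  # True: a ranking drop raises the implied turnover probability
```

Because the BW coefficient is negative, the implied probability falls monotonically as rankings performance improves, which is the pattern in the percentile rows of the table.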


    Table 5

    Logit Models of Business School Dean Turnover and Placement Success

                                        (1)         (2)         (3)
    Lagged Placement Performance      -0.042
                                      (0.033)
    Contemp. Placement Performance                -0.057*
                                                  (0.032)
    2-year Placement Performance                              -0.080**
                                                              (0.034)
    Dean's tenure                      0.055       0.054       0.072
                                      (0.041)     (0.040)     (0.044)
    Dean's age                         0.052       0.057       0.044
                                      (0.042)     (0.041)     (0.044)
    Year                              -0.015      -0.020      -0.036
                                      (0.060)     (0.055)     (0.062)
    Constant                           25.04       35.07       66.58
                                     (118.83)    (110.36)    (123.22)
    Number of observations               245         283         223
    Pseudo R2                          0.050       0.061       0.076
    Log Likelihood                    -88.38      -93.79      -75.35

    Implied probability of turnover at varying performance levels

    10th percentile                    0.159       0.156       0.214
    25th percentile                    0.133       0.122       0.134
    50th percentile                    0.115       0.100       0.101
    75th percentile                    0.102       0.085       0.075
    90th percentile                    0.078       0.058       0.056

    Note: All estimates are logit estimates with asymptotic standard errors reported in parentheses under the coefficient estimates. In all specifications turnover is measured over a 1-year window commencing on September 1st. The dependent variable assumes a value of 1 for all observations where a dean departs for reasons other than death or "moves up". All placement variables are based on a USN placement score we create from the USN placement ranking and/or placement data. Details are provided in the appendix. The Lagged placement performance measure is calculated as the change in the placement score in the 1-year period ending in the March immediately prior to the start of the observation year. The Contemporaneous placement performance measure is the change in the placement score in the 1-year period ending in the March that falls within the observation year. The 2-year placement performance measure is the change in the placement score in the 2-year period ending in the March that falls within the observation year. All other variables are defined as in the earlier tables. The implied probabilities are calculated holding all other variables at their means and varying the level of the performance metric used in each respective column. All estimated models exclude deans who served for less than one year as of the start of the turnover observation period.
    *Significant at the 10% level, **Significant at the 5% level, ***Significant at the 1% level


    Table 6

    Change in Program Characteristics After Dean Replacements

                                                     New Dean   No New Dean    Difference
    Median Change in USN Rank                           0.25        0.00      0.25 (0.407)
    Median Absolute Value Change in USN Rank            3.25        2.00      1.25 (0.071)
    Median Change in BW Rank                           -0.50        0.00     -0.50 (0.822)
    Median Absolute Value Change in BW Rank             2.50        2.00      0.50 (0.430)
    Median Change in USN Placement Score                2.00       -1.00      3.00 (0.236)
    Median Absolute Value Change in Placement Score     5.50        3.50      2.00 (0.101)
    Median Change in Enrollment                         1.76%       0.93%     0.83% (0.327)
    Median Absolute Value Change in Enrollment          5.95%       6.29%    -0.34% (0.743)
    Median Change in Tuition                           11.06%      12.72%    -1.66% (0.193)

    Note: Figures in the New Dean column are median changes over the two-year period starting with the first BW (for BW rankings and enrollment) or USN (for USN rankings and tuition) rankings release after the appointment of a new permanent dean. Figures in the No New Dean column are median changes over two-year periods following the beginning of an academic year in which no new dean had been appointed. Tuition almost always increases, so an absolute value calculation for tuition changes is not meaningful. Figures in parentheses in the Difference column are p-values from a Wilcoxon rank-sum test.
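The p-values in the Difference column come from Wilcoxon rank-sum tests. The test can be sketched in a few lines using the large-sample normal approximation (average ranks for ties; no tie-variance or continuity correction, so results will differ slightly from exact or corrected implementations). The sample data below are made up for illustration, not taken from the paper:

```python
import math
from statistics import NormalDist

def rank_sum_test(x, y):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.
    Returns (z statistic, p-value) for the rank sum of sample x."""
    # Pool the samples, tagging each value with its group (0 = x, 1 = y).
    combined = sorted((v, 0 if i < len(x) else 1) for i, v in enumerate(x + y))
    # Assign average 1-based ranks to tied values.
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + j + 1) / 2  # average of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    w = sum(r for r, (_, g) in zip(ranks, combined) if g == 0)  # rank sum of x
    n1, n2 = len(x), len(y)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

# Illustrative two-year USN rank changes for schools with and without a new dean:
new_dean = [6, 4, -2, 5, 3, 8, -1, 2]
no_new_dean = [1, -1, 2, 0, -2, 3, 1, 0]
z, p = rank_sum_test(new_dean, no_new_dean)  # two-sided p-value
```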