The Philosophical Gourmet Report and Placement

by Carolyn Dicey Jennings, Pablo Contreras Kallens, and Justin Vlasits

The Academic Placement Data and Analysis project (henceforth, APDA) has yielded the most complete information on placement for PhD graduates in philosophy to date, with a focus on graduates between 2012 and 2016. All the data are publicly available on the main page of the site, http://placementdata.com/, and many graphics have been posted to a companion site, http://philosophydata.org. Prospective graduate students will likely wonder how this information compares to earlier metrics, such as the Philosophical Gourmet Report (henceforth, PGR), which in the past has been used by students to compare PhD programs in philosophy. In this post we look at the 2006-2008 PGR’s overall ratings for graduate programs in philosophy, and compare these ratings to APDA’s placement rates for these programs. We find both weak and strong correlations between 2006-2008 PGR ratings and placement rates for 2012-2016 graduates. In particular, the 2006-2008 PGR ratings correlate strongly with placement into programs rated by the 2011 PGR, but only weakly with placement into permanent positions overall.1 This post will discuss both the strengths and the limitations of the PGR rankings as a guide to placement.

(Link to this post at: https://apda.ghost.io/the-philosophical-gourmet-report-and-placement/)

The PGR has for many years collected ratings of graduate programs from a select group of evaluators. This group is asked to “evaluate the following programs in terms of faculty quality” on a scale from 0, “Inadequate for a PhD program,” to 5, “Distinguished” (see the complete instructions here). The mean and median ratings are provided for each program, and programs are ranked by mean rating (seemingly rounded to tenths, with equal ranks for equal rounded values). In the 2006-2008 report (http://www.philosophicalgourmet.com/2008/overall.asp), the worldwide top 10 programs are ranked as follows:

Rank School Mean
1 New York University 4.8
2 Oxford University 4.7
2 Rutgers University, New Brunswick 4.7
4 Princeton University 4.4
4 University of Michigan, Ann Arbor 4.4
6 University of Pittsburgh 4.3
7 Stanford University 4.1
8 Harvard University 4.0
8 Massachusetts Institute of Technology 4.0
8 University of California, Los Angeles 4.0

Note that the ranking is in fact according to university or “school,” rather than program. University of Pittsburgh, for example, has two philosophy PhD programs, but these are merged for the purpose of the PGR. Thus, when we compare PGR and APDA we use the same university rating for each philosophy program at that university.

Someone who graduated between 2012 and 2016 is likely to have used this report to choose a graduate program in philosophy. They will have read the following first few sentences under “What the Rankings Mean” (also present in later reports):

The rankings are primarily measures of faculty quality and reputation. Faculty quality and reputation correlates quite well with job placement, but students are well-advised to make inquiries with individual departments for complete information on this score. (Keep in mind, of course, that recent job placement tells you more about past faculty quality, not current.)

Having produced the first systematic review of placement rates, APDA is now in a position to evaluate these claims for the benefit of future graduate students in philosophy.

In its 2017 report, APDA included 135 graduate programs in philosophy, 92 of which were included in the PGR. (Seven programs rated by the PGR were not included in APDA’s report, due to insufficient publicly-available placement information.) The 2006-2008 PGR says the following about non-rated programs on its main page:

All programs with a mean score of 2.2 or higher are ranked, since based on this and past year results, we have reason to think that no program not included in the survey would have ranked ahead of these programs. Other programs evaluated this year are listed unranked afterwards; there may well have been programs not surveyed this year that would have fared as well.

The PGR thus indicates that non-surveyed programs would have a lower rating than the ranked programs, whose mean ratings start at 2.2, but that they could have ratings as high as those of the evaluated-but-unranked programs (1.6 to 2.1). For this reason, we marked all non-rated (PGR) but included (APDA) programs as having a mean rating of 1 (“Marginal”), which is midway between 0 and 2.
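As a minimal sketch of this step (in Python, with a hypothetical `pgr_mean` Series standing in for the actual ratings data):

```python
import pandas as pd

# Hypothetical 2006-2008 PGR mean ratings indexed by program; NaN marks
# programs included by APDA but not rated by the PGR.
pgr_mean = pd.Series({"Program A": 4.8, "Program B": 2.4, "Program C": None})

# Assign non-rated programs a mean rating of 1 ("Marginal"),
# midway between 0 and 2.
pgr_mean = pgr_mean.fillna(1.0)
```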

Using the APDA database and the PGR rankings for 2006-2008 and 2011, we generated the following information for each program, using the most recent placement for each graduate (a computational sketch follows the list):

  1. the percentage of 2012-2016 graduates from that program placed into permanent academic positions (henceforth, % permanent), where “permanent” is defined as tenure-track or equivalent (e.g. a permanent lectureship); 1182 out of 3164 graduates overall (37%)
  2. the percentage of 2012-2016 graduates placed into permanent academic positions at one of 195 known PhD-granting programs (henceforth, % PhD); 339 out of 3164 graduates overall (11%)
  3. the percentage of 2012-2016 graduates placed into permanent academic positions at 2011 PGR rated programs (henceforth, % PGR); 223 out of 3164 graduates overall (7%)
  4. the percentage of 2012-2016 graduates placed into permanent academic positions at 2011 top-rated PGR programs (henceforth, % Top PGR), where “top-rated” is defined as having a mean rating greater than 3 (“Good”), the overall average PGR rating; 99 out of 3164 graduates overall (3%)
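To make the four measures concrete, here is a minimal sketch of how such rates could be computed from a one-row-per-graduate table. The table and its column names are hypothetical rather than APDA's actual schema; each boolean flag describes the graduate's most recent placement, and the flags are nested, so each category is a subset of the previous one.

```python
import pandas as pd

# Hypothetical one-row-per-graduate table. Flags are nested:
# top_pgr implies pgr, which implies phd, which implies permanent.
grads = pd.DataFrame({
    "program":   ["A", "A", "A", "B", "B"],
    "permanent": [True, True, False, True, False],
    "phd":       [True, False, False, True, False],
    "pgr":       [True, False, False, False, False],
    "top_pgr":   [False, False, False, False, False],
})

# Per-program placement rates: the mean of a boolean column is a proportion.
rates = grads.groupby("program")[["permanent", "phd", "pgr", "top_pgr"]].mean()
print(rates)  # e.g. program A: 67% permanent, 33% PhD, 33% PGR, 0% Top PGR
```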

The 2011 PGR was used to rate the hiring programs so as to match, as closely as possible, the prestige of each hiring program at the time of hiring (note, though, that the 2006-2008 and 2011 PGR ratings are very strongly correlated: .92). Note that the overall number and percentage of graduates go down significantly from those in permanent academic positions to those in permanent academic positions at top-rated PGR programs. Only 3% of all graduates end up in positions of the latter type.

The overall correlations between the 2006-2008 PGR ratings and these values are as follows:

  1. A weak correlation with % permanent: .31
  2. A strong correlation with % PhD: .67
  3. A strong correlation with % PGR: .66
  4. A moderate correlation with % Top PGR: .57

Thus, the 2006-2008 PGR ratings seem to have the strongest correlations with the narrower placement measures. It seems likely that programs with higher PGR ratings place more students into permanent positions at PhD programs because both measures successfully track prestige. But it also seems likely that the PGR is itself a driver of prestige, such that the publication of these rankings made it more likely that graduates from highly rated programs would find permanent academic positions at PhD programs. In any case, the correlations themselves do not tell us how the PGR and these placement rates are causally related.
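For readers who want to reproduce this kind of comparison, here is a minimal sketch using ordinary Pearson correlations, together with the Evans (1996) labels from note 1 (the ratings and rates below are toy values, not APDA's):

```python
from scipy.stats import pearsonr

def evans_label(r: float) -> str:
    """Label |r| using the Evans (1996) ranges cited in note 1."""
    r = abs(r)
    if r < 0.20:
        return "very weak"
    if r < 0.40:
        return "weak"
    if r < 0.60:
        return "moderate"
    if r < 0.80:
        return "strong"
    return "very strong"

# Toy example: PGR ratings and % permanent for five hypothetical programs.
ratings = [4.8, 3.5, 2.2, 1.0, 1.0]
pct_permanent = [0.55, 0.40, 0.45, 0.20, 0.35]

r, _ = pearsonr(ratings, pct_permanent)
print(round(r, 2), evans_label(r))  # 0.83 "very strong" on this toy data
```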

We might compare the above to the program ratings from the 2016 and 2017 APDA surveys. These are the mean ratings from past PhD graduates in response to the question: “How likely would you be to recommend the program from which you obtained your PhD to prospective philosophy students?”, from “Definitely would not recommend” (1) to “Definitely would recommend” (5). The correlations between the APDA program ratings and these values are as follows:

  1. A weak correlation with % permanent: .37
  2. A weak correlation with % PhD: .34
  3. A weak correlation with % PGR: .36
  4. A weak correlation with % Top PGR: .33

From this we can see that the program ratings by past graduates have a somewhat stronger correlation with permanent placement rates than the 2006-2008 PGR ratings do, but weaker correlations with the narrower placement rates. Given that the APDA ratings were provided in 2016-2017, and so cannot be treated as predictors of placement, these correlations may instead indicate how important different types of placement are to graduates when rating their graduate programs.

We might likewise compare the PGR correlations to correlations between placement rates themselves, year to year. To do this, we first chose two three-year graduation ranges: 2006-2008 and 2012-2014. We chose the first range to match the 2006-2008 PGR, while noting that APDA's data are very incomplete for this range and so would not normally be reported (APDA has around half as many graduates for these years as for 2011 and later, with the sample biased toward those in permanent academic positions). Since earlier graduates have had much more time to find permanent academic employment, which would limit our ability to distinguish between programs, we excluded permanent placements that occurred more than three years after graduation. (For this reason, we did not use most recent placement, as above, but first permanent placement.) For the second range we chose the most recent three-year period for which at least three years have since passed: 2012-2014. The correlation between the 2006-2008 and 2012-2014 placement rates is weak: .27. Yet it is somewhat stronger than the correlation between the 2006-2008 PGR ratings and the 2012-2014 placement rates: .21.
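A minimal sketch of this windowing step (again with hypothetical column names and toy data): count a graduate as placed only if their first permanent placement came within three years of graduation, then correlate the resulting per-program rates across the two graduation ranges.

```python
import pandas as pd

# Hypothetical table: year of PhD completion and year of first permanent
# placement (NaN if the graduate has no permanent placement).
grads = pd.DataFrame({
    "program": ["A", "A", "B", "B"],
    "grad_year": [2006, 2013, 2007, 2012],
    "placement_year": [2008, 2019, None, 2013],
})

# Count a placement only if it occurred within three years of graduation
# (NaN comparisons evaluate to False, so unplaced graduates count as False).
grads["placed_in_3yr"] = (grads["placement_year"] - grads["grad_year"]) <= 3

early = grads[grads["grad_year"].between(2006, 2008)]
late = grads[grads["grad_year"].between(2012, 2014)]

early_rates = early.groupby("program")["placed_in_3yr"].mean()
late_rates = late.groupby("program")["placed_in_3yr"].mean()

# Pearson correlation across programs (trivially -1.0 here,
# since the toy data have only two programs).
print(early_rates.corr(late_rates))
```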

Given the additional noise in the 2006-2008 dataset, due to its being very incomplete, we suspect that the actual correlation between past and present placement rates is higher than what was found above. We therefore looked at correlations for two recent three-year periods, 2011-2013 and 2014-2016, again allowing three years for permanent placement. In this case, the correlations were stronger, and again favored past placement rates over the PGR: a moderate correlation of .41 between 2011-2013 and 2014-2016 placement rates, compared to a weak correlation of .24 between the 2006-2008 PGR ratings and 2014-2016 placement rates.

Finally, we compared permanent placement rates with earlier values derived by Carolyn Dicey Jennings in a NewAPPS post, prior to the start of the APDA project. In that post, Jennings generated placement rates using estimates for the number of graduates from each program, as her data were not yet complete. The correlation between these permanent placement rates, which covered graduates between 2012 and 2014, and APDA’s permanent placement rates for graduates between 2014 and 2016 is weak, yet higher than that of the PGR: .37 vs. .24. (The correlation between these NewAPPS placement rates and APDA's placement rates for the more overlapping timeframe of 2011-2013 is strong, as expected: .60.)

Given the above, past permanent placement rates appear so far to be the best predictor of future permanent placement rates. That is, PGR ratings do not correlate with permanent placement rates as well as past placement information does. Whereas the correlations between the 2006-2008 PGR ratings and placement rates were about the same for the two later time ranges, and in fact slightly lower for the nearer range (.21 for 2012-2014 vs. .24 for 2014-2016), the correlations between placement rates across time ranges increased as the ranges grew closer together. This could be due to noise in the early ranges, artificially lowering the correlation between those and later ranges, but it could also be due to changes in placement rates over time, which would limit the utility of past placement rates for predicting future ones. Yet even the earlier, less complete data correlate at least as well with recent placement as the PGR (.27 vs. .21). The PGR ratings do have moderate to strong correlations with the narrower categories of permanent placement into PhD-granting programs, PGR-rated programs, and top-rated programs. But note that the proportion of total graduates who find such placements is fairly small (11%, 7%, and 3%, respectively).

To go a step beyond correlation, and to assess how well the PGR lines up with different models of placement preference, we constructed three separate sorted lists that we compared with the 2006-2008 PGR ranking. Each sorted list makes use of the placement rates listed above (% permanent, % PhD, % PGR, and % Top PGR) as well as an assumed order of preference. We borrowed the preference-rank translations listed here. Specifically, we constructed three models (a scoring sketch follows the list):

  • The Academic Model (embedded below): a prospective student strongly prefers permanent academic placement, and placement into each narrower category is seen as a further bonus. For this model, % permanent is multiplied by 75%, % PhD by 17%, % PGR by 6%, and % Top PGR by 2%. (Since each of these is a subset of the previous one, each is in effect treated as a further bonus of decreasing importance.) The programs are then sorted according to the highest sum of these values. See the sorted list here.
  • The Research Model: a prospective student strongly prefers placement in a PhD-granting program, with placement into a PGR-rated PhD program seen as a bonus and placement into a top PGR-rated program as a further bonus. For this model, % PhD is multiplied by 75%, % PGR by 17%, % Top PGR by 6%, and % Other Permanent (% permanent minus % PhD) by 2%. The programs are then sorted according to the highest sum of these values. See the sorted list here.
  • The Prestige Model: a prospective student strongly prefers placement in a top PGR-rated program, followed by placement in any other PGR-rated program, placement in any other PhD-granting program, and then any other permanent placement. For this model, % Top PGR is multiplied by 75%, % Other PGR (% PGR minus % Top PGR) by 17%, % Other PhD (% PhD minus % PGR) by 6%, and % Other Permanent (% permanent minus % PhD) by 2%. The programs are then sorted according to the highest sum of these values. See the sorted list here.
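Here is a minimal sketch of the scoring shared by all three models, continuing the hypothetical `rates` table from the earlier sketch: convert the nested categories into disjoint slices where a model calls for them, take the weighted sum for each program, and sort.

```python
import pandas as pd

# Toy per-program rates, nested as before:
# top_pgr within pgr within phd within permanent.
rates = pd.DataFrame(
    {"permanent": [0.50, 0.30], "phd": [0.20, 0.05],
     "pgr": [0.10, 0.02], "top_pgr": [0.05, 0.00]},
    index=["Program A", "Program B"],
)

# Disjoint slices used by the Research and Prestige Models.
other_permanent = rates["permanent"] - rates["phd"]
other_phd = rates["phd"] - rates["pgr"]
other_pgr = rates["pgr"] - rates["top_pgr"]

# Academic Model: nested categories, each a further bonus.
academic = (0.75 * rates["permanent"] + 0.17 * rates["phd"]
            + 0.06 * rates["pgr"] + 0.02 * rates["top_pgr"])

# Research Model: PhD-granting placement first, PGR placements as bonuses.
research = (0.75 * rates["phd"] + 0.17 * rates["pgr"]
            + 0.06 * rates["top_pgr"] + 0.02 * other_permanent)

# Prestige Model: disjoint categories in descending order of preference.
prestige = (0.75 * rates["top_pgr"] + 0.17 * other_pgr
            + 0.06 * other_phd + 0.02 * other_permanent)

# Programs sorted by expected utility under a given model.
print(academic.sort_values(ascending=False))
```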

The correlations between the PGR and these models are moderate to strong: .64 between the 2006-2008 PGR ratings and expected utilities in the Prestige Model, .68 between the 2006-2008 PGR ratings and expected utilities in the Research Model, and .40 between the 2006-2008 PGR ratings and expected utilities in the Academic Model. Yet, many programs that do well in these models were left out of the 2006-2008 PGR:

  • Twenty-three programs in the top 92 on the Academic Model were left out of the 2006-2008 PGR, listed below with their ranks in parentheses (recall that the 2006-2008 PGR included 99 programs, 92 of which were considered here): University of Cincinnati (6), Baylor University (7), University of Oregon (12), University of Tennessee (15), Pennsylvania State University (21), Villanova University (26), DePaul University (28), Catholic University of America (33), Vanderbilt University (36), University of New Mexico (41), University of Nebraska (45), Fordham University (48), Stony Brook University (54), Duquesne University (60), University at Binghamton (63), University of Georgia (67), University of Oklahoma (75), University of Kansas (76), Tulane University (80), Wayne State University (81), Bowling Green State University (84), Marquette University (87), and University at Buffalo (91).
  • Twenty programs in the top 92 on the Research Model were left out of the 2006-2008 PGR: Pennsylvania State University (31), University at Binghamton (36), University of Nebraska (45), Tulane University (51), University of Oregon (55), Institut Jean Nicod (57), Bowling Green State University (58), Fordham University (61), Baylor University (62), Villanova University (63), University of Kentucky (69), Katholieke Universiteit Leuven (71), DePaul University (76), New School for Social Research (78), Catholic University of America (81), Duquesne University (83), University at Buffalo (84), University of Utah (86), Stony Brook University (88), and Boston College (91).
  • Twenty programs in the top 92 on the Prestige Model were left out of the 2006-2008 PGR: Duquesne University (38), University of Nebraska (44), University of Oregon (45), Baylor University (46), University at Binghamton (47), Pennsylvania State University (55), University of Cincinnati (61), Villanova University (65), DePaul University (70), University of Tennessee (71), Catholic University of America (72), Fordham University (74), Vanderbilt University (78), New School for Social Research (80), University of New Mexico (81), Tulane University (82), Stony Brook University (84), Bowling Green State University (87), Boston College (89), and University of Georgia (92).

We note that many of these programs are “pluralist”—that is, they include continental approaches to philosophy. The PGR has been criticized in the past for failing to adequately represent these areas of philosophy, most famously by Richard Heck:

Partly as a result of the factors just mentioned, the overall rankings in the Report are biased towards certain areas of philosophy at the expense of others. The most famous such bias is that against continental philosophy. I don't much care for that style of philosophy myself, but it isn't transparently obvious why Leiter's oft-expressed and very intense distaste for much of what goes on in certain "continental" departments should be permitted to surface so strongly in the rankings.

While we did not perform a systematic review of these programs, we did look at the correlation between PGR ratings and mention of keywords by past graduates describing those programs in the 2016-2017 APDA surveys. We found that both the 2006-2008 PGR and the 2011 PGR had a moderate positive correlation with the keyword "analytic" (.55 and .58, respectively), but a weak to very weak negative correlation with the keyword "continental" (-.20 and -.14, respectively). It is possible, given the above, that a bias against certain types of philosophy kept the PGR from having a stronger correlation with permanent placement. Further, it is possible that those programs left out of the PGR would perform even better in a model that did not use the PGR as a metric of prestige. We leave further exploration of these possibilities to another post. For now, we simply note this as a further potential limitation for students who wish to use the PGR as a predictor of placement.
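A minimal sketch of this keyword check (hypothetical survey table and toy values): reduce each survey response to whether it mentions the keyword, average within programs, and correlate with the PGR ratings.

```python
import pandas as pd

# Hypothetical survey responses: one free-text program description each.
survey = pd.DataFrame({
    "program": ["A", "A", "B", "B"],
    "description": ["strongly analytic", "analytic and rigorous",
                    "continental focus", "pluralist department"],
})

# Share of each program's respondents mentioning each keyword.
for kw in ["analytic", "continental"]:
    survey[kw] = survey["description"].str.contains(kw, case=False)
mentions = survey.groupby("program")[["analytic", "continental"]].mean()

# Correlate keyword mention rates with PGR ratings (toy values; with only
# two programs the correlations are trivially +/-1).
pgr = pd.Series({"A": 4.0, "B": 1.5})
print(mentions["analytic"].corr(pgr), mentions["continental"].corr(pgr))
```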


1. Correlation coefficient ranges are here described as follows: .00-.19 “very weak”, .20-.39 “weak”, .40-.59 “moderate”, .60-.79 “strong”, .80-1.0 “very strong”. See Evans, J. D. (1996). Straightforward statistics for the behavioral sciences. Pacific Grove, CA: Brooks/Cole Publishing.