As senior scientists, we have navigated the challenging waters of the PhD qualifying exam—both as students taking it and as professors administering it. As students, both of us excelled academically, yet we anticipated the qualifying exam with anxiety and dread. How could we not? The professors judging us could ask about any topic on which they were expert. We prepared for the exam by revisiting our coursework, aware that even the most thorough review might not suffice, as some professors saw the exam as an opportunity to push students beyond core knowledge.

We recall preparing diligently for specific topics about which we were never queried, and thus we were unable to showcase our extensive preparation. We recall knowing the answers to some exam questions in retrospect, but in the pressure of the moment, we couldn’t remember them. What’s more, passing the qualifying exam left us no closer to defining our thesis research direction.


Later we discovered that professors also approach the qualifying exam with anxiety and dread. The consequences of the exam put immense pressure on professors to craft questions that can accurately gauge a student’s potential. The exam’s duration—mere hours or days—seems inadequate for making such a significant judgment on a student’s future. Students commonly stumble over questions, compelling us to look past their mistakes to infer their potential. Such a process depends heavily on subjective judgment. And the fact that those judgments usually rest with a few faculty members raises concerns about fairness and the inclusion of diverse viewpoints. Even more troubling, insights gained from the exam are often minimal; we could usually predict a student’s outcome from their past coursework performance.

When we question the rationale behind the qualifying process, we find that the arguments for retaining the traditional qualifying exam’s written and oral components do not hold up to scrutiny. One commonly stated goal is to assess the student’s grasp of the core knowledge attained from one to two years of coursework. We wondered whether failure to pass the qualifying exam reveals more about the inadequacies of the courses than those of the student. If the true goal is to ensure mastery of core knowledge, a simpler approach is to make that mastery the standard for passing the courses.

Another common argument is that the exam tests the student’s ability to synthesize concepts across disciplines. Although synthesis is a valuable skill, so are deep dives—Nobel Prizes, for instance, are often awarded to scientists who relentlessly pursue a narrow scientific question.

Yet another argument is that the exam assesses creativity. A student’s creative strengths, however, may lie in emerging fields, such as artificial intelligence, that are not covered by the traditional written and oral components and that may fall beyond the expertise of the examiners.

There also are compelling arguments against the traditional format. The qualifying exam is the most stressful milestone in a graduate degree. Not all students perform well under stress, regardless of their capacity for deep thought. Moreover, the traditional exam embodies a contradiction: It is contrived and fails to mirror the realities of actual research. A successful scientific career relies on conducting actual research, not on taking exams. Judging a researcher’s potential by testing them on tasks unrelated to their future work is inherently flawed.

Despite those shortcomings, there remains a compelling necessity for qualifying exams: Experience shows that some students, despite passing their courses, struggle to complete a dissertation within the typical five-year doctoral program. Identifying those students early allows all parties to move forward without investing years of effort into a PhD journey that may ultimately be unsuccessful.

Recognizing the limitations of the traditional qualifying exam, we convinced our colleagues in the department of atmospheric, oceanic, and Earth sciences at George Mason University to scrap the traditional format five years ago and institute a new process that was designed to overcome its shortcomings.

Figure 1 provides a visual summary of the new process and its place within the overall PhD track. It centers on a semester-long course typically taken in the spring of a student’s second year. In the course, the student works with their adviser to formulate a project that will lead to a research paper.

Figure 1.

The qualifying exam, revisited. An incoming PhD student in the atmospheric, oceanic, and Earth sciences department at George Mason University completes a series of milestones (blue) in the semesters before the qualifying course. The milestones of the qualifying course are listed in the central box. The successful student goes on to submit a manuscript to a journal and form a dissertation committee (green). The unsuccessful student may take the course again or exit to the master’s degree track (red).


The student presents their paper idea in two meetings. In the first, the student delivers their proposal during the first 15 minutes, followed by a 75-minute period during which panel members pose critical questions about it, as shown in figure 2. Half the department’s faculty members are present in that meeting. About four weeks later, the student presents to the other half of the faculty. The two faculty panels function independently, without communicating with each other. That autonomy gives the student two independent opportunities to present their work at their best, free from any bias carried over from the earlier performance.

Figure 2.

A student in the new PhD qualifying process presents a proposal for a paper to half the department's faculty. About four weeks later, the student gives a revised presentation to the other half of the faculty. The two faculty groups function independently, without communicating with each other.


After each meeting, the student receives written feedback from each panel member on various aspects of their paper idea. That feedback includes an assessment of the student’s grasp of the relevant literature, their physical understanding of the scientific problem, their ability to perform quantitative analysis, and their effectiveness as a communicator. Panel members evaluate each of the categories, but they don’t assign a grade. The feedback is intended to be constructive—to help the student identify areas in which they need improvement.

By the end of the semester, the student submits either a manuscript that is nearly ready for submission to a peer-reviewed journal or, if the research is not yet completed, a proposal for a scientific paper that incorporates their original research. All panel members read the student’s document. On the final day of the course, the student gives a longer oral presentation to the entire faculty. The faculty members then discuss and make recommendations to the program director.

The new qualifying process has several advantages over the traditional format. First, instead of assessing a student’s knowledge, faculty members evaluate the student’s ability to perform the activities critical to scientific inquiry: identifying a scientific problem, devising solutions, and engaging in discourse. Second, the process spans an entire semester, so decisions on a student’s performance are not based on a single moment. Third, each student chooses their own research topic, affording them the opportunity to showcase their creativity.

Furthermore, a student receives questions tailored to their chosen topic. That focus avoids the pitfall of evaluating them on their response to questions far removed from their prepared area. Although confronting a student with unanticipated questions can have merits, using that approach as a sole determinant of the student’s future is risky. A semester-long engagement provides a more reliable assessment of potential.

The new qualifying process also offers each student multiple opportunities to succeed. Because the two faculty panels are independent, a student who struggles in the first meeting can present a revised version of their work to a fresh audience. Some students might face particular challenges in the oral presentation, such as stage fright or language barriers. To address that issue, the new process includes a written submission as an additional means to demonstrate the student’s abilities.

Grading in the qualifying process does not rely on averaging individual scores. The primary goal is to identify evidence of the student’s research capabilities. The evidence may not be uniformly apparent across all components, but outstanding performance in a single aspect can eclipse weaker performance in other areas. Additionally, we consider the student’s progress throughout the semester because improvement is often a key indicator of potential success in research.

Another advantage is that a student has ample opportunity to revise their work. Even the best student may not fully explore their ideas initially. Our semester-long process mimics the peer-review process and typically exposes any serious shortcoming that may exist in a student’s proposal. Observing how students adapt to constructive feedback often provides more insight into their potential than does reviewing their initial proposals.

Unavoidably, subjective judgments affect the final decision, and they may be influenced by biases tied to race, gender, sexual orientation, or disability. Even the traditional format, with its fact-based questions, involves subjective judgments in deciding the acceptability of a student’s response. One way to counter biases is to involve a diverse panel of judges whenever subjective judgments come into play. In the traditional format, the decision often rests with a select few faculty members. In the new format, the entire faculty openly participates in the decision-making process, which brings a wider range of perspectives into the discussion.

By distributing responsibility across all faculty members, the new process also lightens the burden on individual advisers, who often hesitate to single out their own struggling students. When a student is redirected, their adviser usually appreciates the collective intervention.

Although the new format requires a greater investment of time from faculty, productive scientists are accustomed to allocating time for conferences and peer-review duties. And the new qualifying process calls for minimal preparation by faculty, with only modest tasks required after each meeting, such as filling out evaluation forms. When it comes to peer-review service, the question is how best to manage one’s time reviewing others’ work. Allocating a portion of that time to assisting students in one’s own department proves to be a sound investment in upholding the quality and integrity of the qualifying process. Ultimately, the efforts produce better student outcomes, which, in turn, cast a positive light on the faculty and the department.

Fellow scientists who hear about our qualifying process are often doubtful about its feasibility in their own departments. They cite factors such as a large student population. We are confident, however, that the new process can be tailored to any department. Our PhD program at George Mason has a dozen faculty members and admits three to six candidates per year. For larger departments, splitting students and faculty into smaller cohorts operating in parallel is a feasible solution.

Another concern has been the perceived inefficiency of involving faculty who lack expertise in a student’s chosen topic. But we have found the opposite to be true: Observing how the student articulates their research to nonspecialists, who nonetheless possess broad scientific knowledge, has several advantages. Incorporating diverse expertise in faculty panels, for instance, ensures that a mix of technical and foundational questions will be addressed, which makes the evaluation more thorough.

The new process also encourages faculty to engage with each student out of genuine interest, thus fostering a less adversarial interaction than the traditional approach. The reversal of the conventional roles of teacher and student mirrors what a student will encounter in advanced doctoral research. Furthermore, the shift in dynamic creates opportunities for a student to demonstrate creativity in handling conflicting criticisms that arise from reviewers with different knowledge backgrounds.

One issue that has generated considerable debate among our faculty is the grading policy. Currently, a student who passes the qualifying course receives either an A or a B. The A grade, however, is reserved for students who submit a manuscript that the faculty believes can be refined into a publishable paper after a few months of revision. That’s a high standard, and not all exceptional students meet it.

We believe that a significant distinction exists between a student who develops a nearly publishable paper in their second year and one who does not, and the grade assigned to each one is intended to reflect and reward that difference. Moreover, the standard is attainable: One or more students achieve it each year.

Most second-year graduate students find the prospect of formulating and defending a publishable scientific analysis in a single semester daunting. Indeed, many students have never presented their own research in front of a group of scientists. To address the issue, we have implemented support mechanisms to assist each student throughout the qualifying process.

First, the student works with their adviser to formulate an idea that will be integrated into their dissertation. If the student is supported by a grant, they are encouraged to select a topic related to that grant, but their contributions must be independent. The new format provides opportunities for the student to innovate while still benefiting from their adviser’s guidance.

Advisers must avoid overdoing their guidance; otherwise, the process becomes an evaluation of the adviser instead of the student. Our tenet is that the process should not disrupt the natural interaction between student and adviser. Reasonable guidance includes suggesting research topics, offering feedback on presentations and written materials, assisting in problem diagnosis, and helping the student devise strategies for solutions.

Beyond that, it is left to the student to use the information they receive. The adviser should avoid writing code on behalf of the student or producing text that could be copied into the student’s written submission. The student is expected to defend their ideas without assistance from their adviser. In addition, we advise each student to reduce their course load or take a reading course during the qualifying process to allow more time for conducting independent research.

The two of us currently lead the qualifying course, guiding students through the process. We listen to practice talks prior to panel meetings and offer guidance on delivering effective presentations. Students consistently underestimate the level of detail necessary to communicate their research plan effectively, and some are unwittingly too dependent on their advisers to address basic questions related to their project. We strive to inspire students to take ownership of their work and to thoroughly understand the models and data that they use. Practice talks can expose potential research flaws early enough for students to make adjustments before their first panel meeting. For a point-by-point comparison of the traditional exam’s shortcomings and the advantages of the new process, see figure 3.

Figure 3.

Comparing qualifying processes. The traditional PhD qualifying exam assesses a student's knowledge through written or oral exams that last a few hours or days. The new PhD qualifying process evaluates a student's progress toward a publishable research paper over a 16-week semester.


We also meet with each student after their panel meeting to discuss written feedback from the faculty. That feedback resembles the kind that might be encountered during a genuine peer-review process, encompassing not only constructive feedback but also potential contradictions. As educators, we believe that it is crucial not to shield students from that reality. Instead, we strive to expose students to diverse perspectives and help them interpret the resulting feedback constructively.

Sixteen weeks is hardly enough time to complete a serious research project. Accordingly, we encourage every PhD student to begin preparing for the qualifying process as soon as they enter our program. Any research conducted by a student during their time in the graduate program is permitted for use in the qualifying process. And the qualifying process is explained in detail to PhD students at the end of their first spring semester. The early introduction helps instill a productive mindset in each student as they approach their first summer in the program—a period devoid of course distractions that allows them to focus wholeheartedly on their research goals.

As advisers, we try to strike a balance between providing assistance and giving students space to develop their own thinking. To do that effectively, we’ve adopted a strategy inspired by what’s known as the Heilmeier catechism. George Heilmeier, who led the Defense Advanced Research Projects Agency in the mid-1970s, crafted a series of questions that every good proposal should answer.1 The list distills years of wisdom into a concise question set. We have adapted it to create the following questions that every research project should address:

  • What are you trying to do? State your objectives without jargon.

  • Who should care? If you are successful, what difference will it make?

  • What research has been done about the topic in the past?

  • What is the precise gap that you are trying to fill?

  • What is new in your approach?

  • Why do you think your approach will be successful, and how will you measure success?

The simplicity of the questions can deceive students into underestimating the effort needed to answer them effectively. To ensure the development of thorough answers, students write an abstract for their projects early in the semester. Those abstracts are then shared with the class, sparking discussions about best practices and common pitfalls when communicating scientific ideas. Invariably, some abstracts fail to address one or more key questions. Some students are convinced that they have responded adequately to a question until further discussion reveals gaps in their explanation. The value of carefully addressing the questions often hits home during the discussions.

Many advanced students begin the semester with a well-defined paper plan but are surprised by the challenge of communicating their plan to others. The situation reflects a fundamental reality: Success as a scientist relies on communication as well as critical thinking. Proficient communication skills are vital for today’s PhD graduates, whether it’s for securing funds, responding to peer review, teaching, or mentoring. Although research and communication skills are often regarded as distinct, we find much truth in the adage that poor communication may reflect poor thinking.

The final decision of whether a student passes the qualifying process is a collective one. Although faculty opinions may vary initially, discussions usually lead to a consensus. Students who do not pass typically display one or more common traits: an inability to demonstrate quantitative analysis; a failure to understand the relevant literature; insufficient familiarity with the data or model chosen for study; an inability to answer questions related to calculations, models, or assumptions; an inability to articulate how the proposed research addresses key questions; difficulty in communicating the research plan; and an inability to understand or convey the relevant concepts.

A student at risk of failing is typically alerted after the panel meetings. That early notification gives them time to make improvements and address the concerns. Consequently, a negative outcome is rarely a surprise to the student. Some students have voluntarily withdrawn from the qualifying process during the semester after recognizing that they were unlikely to meet the necessary requirements. The self-selection process allows students to make informed choices about their academic path and potentially explore alternative options that better align with their capabilities and interests.

A student who fails the qualifying process has the opportunity to reframe their work into a master’s thesis and complete that degree instead. Some students have retaken the PhD qualifying process and progressed to candidacy, having benefited from the early identification of areas for improvement.

A fortunate byproduct of the new qualifying process is that it energizes the student for their dissertation research. Throughout the semester, the student has multiple opportunities to present and refine their research ideas. When the student successfully passes the qualifying process, they do so with confidence in their ideas and typically reduce the time required to complete their first paper. Compare that with the traditional qualifying exam, which a student might pass without gaining any clearer direction for their research. Another byproduct is that faculty members learn of the student’s research topic early and may develop productive dialogues with them. Likewise, a student becomes familiar with faculty interests early on. That helps them identify suitable candidates for members of their dissertation committee.

Each year we watch students rise to the challenge of crafting original ideas. Witnessing students’ growth and maturation is inspirational. Indeed, the faculty also learn from the process. Many of us feel that our skills as research advisers improve as a result of it. The new format elevates the qualifying process from a routine student assessment to a shared journey of scientific discovery.

We thank our fellow faculty members for their patience and for helping to refine the qualifying process, testing it (with two of our own students), and ultimately integrating it into the department’s curriculum. We also thank faculty and students for their feedback on the manuscript.

1. G. H. Heilmeier, “Some reflections on innovation and invention,” Bridge, Winter 1992, p. 12.

Tim DelSole (tdelsole@gmu.edu) and Paul Dirmeyer are professors of climate dynamics at George Mason University and senior research scientists at its Center for Ocean-Land-Atmosphere Studies in Fairfax, Virginia. Together they have overseen the PhD qualifying exam in their department for more than two decades.