Robustness・Climate Science・Modeling

Current/ongoing projects

Dissertation: The Role of Model Intercomparisons in Climate Science

I give a philosophical analysis of model intercomparison practices in climate modeling. Model intercomparisons serve as part of the basis for regular scientific assessments such as the Intergovernmental Panel on Climate Change (IPCC) reports. However, some critics cast doubt on what we can learn from these comparisons, emphasizing, for example, that models can agree for the wrong reasons and that models are simply too complex to understand. I offer a response to such doubts. My dissertation includes:

(1) a justification for the epistemic significance of model agreement—the gist is that when vastly complex models from different modeling institutions agree on some result, this agreement must be understood in conjunction with the fact that the models are all part of the same model family or model type, and that the models themselves, their components, and their outputs all have both empirical and theoretical sources of evidential support;

(2) a descriptive account of scientists' criteria for diagnosing and analyzing model-model discordance, offering factual evidence that scientists do gain new insights in these comparison projects and that failure (i.e., model error) can be fruitful;

(3) an account of interactive pluralism, argued via an appeal to maximizing our learning (à la Hasok Chang) in scientific research. I examine the senses in which climate models compete with one another, as well as the associated risks and benefits. I end by generalizing to other cases of scientific modeling.

Ultimately, in my dissertation I show how model intercomparison projects are both richly informative and ripe for philosophical analysis, and that skeptical claims that have been made about climate models must be thoroughly revised or else given up.

Research Assistant: NSF Project on Big Data in Climate Science

I worked as a research assistant at the National Center for Atmospheric Research (NCAR) in Boulder, CO, USA on Dr. Elisabeth Lloyd's NSF grant (# 1754740) on big data in atmospheric science. Our team talked with regional climate modelers and data scientists about the storage, accessibility, assimilation, and ontology of regional climate model data. This model data is at a high enough resolution to be useful both for city planners and for concerned citizens who want to know how climate change will impact their local communities, which makes it crucially important that the data be well understood. As a research assistant, I interviewed scientists, organized research notes, and brainstormed with the team, all of which culminated in their now-forthcoming paper titled "Variety of Data-Centric Science: regional climate modeling." We are also collaborating on a paper about how model translation can serve as a bridge between climate modelers and downstream users of climate data.

The Robustness of "Robustness"

An in-progress collaborative project (with Ben Kravitz) on different methods for measuring agreement between climate model projections. The key questions are:

(1) To what extent is an assessment of model agreement dependent on the agreement metric?

(2) Is there a "best" metric and, if so, what is it and why is it the best?

Can We Have an Absolute Timescale?

An in-progress collaborative project (with Dan Li) exploring the mutability and fixity of temporal data in paleoclimatology.

Robustness in Economic Modeling

A paper under review (in collaboration with Dan Li) examining the limits of robustness analysis in economic modeling. Draft available on request.

Published work

Philosophers of science have categorized different meanings of robustness under various schemes and for various purposes. They have proposed both general and context-specific accounts. Here, I examine and compare Schupbach's (2018) explanatory account of robustness analysis (ERA) against Lloyd's (2015) account of "model robustness" in a discussion of climate model intercomparisons. I show some limitations of ERA by looking at how climate scientists discuss climate model agreement in their published work. Moreover, I suggest that many important elements of Lloyd's model robustness help clarify the type of reasoning at work in scientific practice. Many of these characteristics are absent from Schupbach's account, calling into question his claims that ERA "is descriptively truer to scientific practice" (2018, 297) and is completely general across the sciences. Thus, inferences to robustness in climate model intercomparison projects often do not turn on the elimination of competing hypotheses (contra Schupbach), but do involve models that share a model type and have independent theoretical and empirical support backing them up, as described by model robustness. Ultimately, however, I argue that Schupbach's and Lloyd's accounts are complementary and that neither one is "truer to scientific practice."

Paper is available here.

Elsevier may have taken down the paper. If so, email me for a copy.

Based on the disproportionate amount of attention paid by climate scientists to the supposed global warming hiatus, it has recently been argued that contrarian discourse has "seeped into" climate science. While I agree that seepage has occurred, its effects remain unclear. This lack of clarity may give the impression that climate science has been compromised in a way that it has not; such a conclusion should be resisted. To do this, I argue that the effects of seepage should be analyzed in terms of objectivity, and I use seven meanings of objectivity to analyze contrarian discourse's impact on climate science. The resulting account supports the important point that, despite the reality of seepage, climate science has not been compromised in a way that invalidates the conclusions its scientists have drawn.

Paper is available here.

Elsevier may have taken down the paper. If so, email me for a copy.