Causal heterogeneity (2020-present)
Causal analysis is a cornerstone of policy-oriented research. Successful policy requires reliable (i) identification and (ii) estimation of causal effects that are also useful predictors of the outcomes of possible interventions. An obstacle to finding suitable estimates is the causal heterogeneity of the population one wishes to intervene on. Popular causal inference methods often treat the data-generating mechanism as a “black box”, because data on individual units are unavailable, too costly to collect, or hard to analyse. As a result, the estimates produced by such methods may end up aggregating heterogeneous causal effects, namely effects that differ widely from one subpopulation to another. Without knowledge of the different ways in which the cause influences the effect in the target population, policies based on such estimates risk being unsuccessful.
As concerns (i), the project will show that heterogeneity is a severe threat to causal inference methods based on experimental or quasi-experimental protocols (RCTs, instrumental variables), as codified by Woodward (2003, 2021). When the aggregates' components are not observed, confounding cannot be ruled out. How can causal inference be rationalized in such cases? The project will attempt to rationalize the inference on abductive grounds (stability of results across studies, non-falsification of the target hypothesis).
As concerns (ii), the project will exploit the “independent component (IC) representation” (Casini, Moneta, and Capasso 2021) to uncover and estimate causal heterogeneity. Although the IC representation gives no explicit information on how the endogenous variables are related to each other, it has the advantage over a traditional structural model that one can gain useful information on hidden sources of variation, which allows one to identify “ill-defined variables” (Spirtes and Scheines 2004) and improve causal inference (e.g., by identifying “ambiguous manipulations”). The project will apply this framework to reducing the ambiguity about how the cause transmits its effect, by separating different effects in different subpopulations. In particular, it will exploit variables that are differently affected by the cause (e.g., side-effects) to identify subgroups that respond differently to the cause, and recalculate the effect size in those subgroups. This strategy promises to aid the selection of suitable targets of intervention and to reduce uncertainty about policy effects.
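The subgroup strategy can be conveyed with a toy simulation (my own illustrative sketch, not the project's actual estimator). Suppose a randomized treatment has opposite effects in two latent, unobserved subgroups, and a side-effect occurs under treatment mainly in one of them. Splitting the treated units by the observed side-effect then recovers the heterogeneous effect sizes that the pooled estimate averages away:

```python
# Hypothetical illustration: all names and numbers below are assumptions
# invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)   # latent subgroup (0 or 1), not observed
treat = rng.integers(0, 2, n)   # randomized treatment assignment

# Heterogeneous effects: +2.0 in subgroup 0, -1.0 in subgroup 1
effect = np.where(group == 0, 2.0, -1.0)
outcome = effect * treat + rng.normal(0.0, 0.5, n)

# A side-effect that occurs under treatment mainly in subgroup 1
side_effect = (treat == 1) & (group == 1) & (rng.random(n) < 0.9)

baseline = outcome[treat == 0].mean()
pooled = outcome[treat == 1].mean() - baseline           # ~ +0.5: masks heterogeneity
eff_se = outcome[(treat == 1) & side_effect].mean() - baseline   # ~ -1.0
eff_no = outcome[(treat == 1) & ~side_effect].mean() - baseline  # ~ +1.7
```

The pooled estimate (about +0.5) would suggest a mildly beneficial intervention for everyone, whereas the side-effect-based split reveals that the treatment harms one subgroup and strongly benefits the other.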
Mechanistic constitution (2017-2020)
Unlike the notion of material constitution (the relation between an entity and its parts), which philosophers have discussed for centuries, the notion of mechanistic constitution has only recently entered the philosophical scene. Mechanistic constitution is a non-causal relation between a system's higher-level phenomenon (e.g., a mouse finding her nest) and the lower-level behaviours of some of the system's parts (e.g., NMDA activation at the synapses of the mouse's neurons). What remains unclear is how mechanistic constitution is to be understood, especially in light of the sustained criticism recently directed at the most prominent theory of constitution in the literature, namely Craver's (2007) "mutual manipulability" theory.
The first goal of this project is to provide operational definitions that allow one to better understand how evidence bears on the correct application of the concept of mechanistic constitution. In this respect, this inquiry differs from the one on material constitution, which was not concerned with evidence-based reasoning, and resembles recent work on causation.
Secondly, the notion of mechanistic constitution is central to mechanistic explanations, which are ubiquitous in the special sciences. In a mechanistic explanation, a mechanism’s parts and behaviors explain the phenomenon in virtue of constituting that phenomenon. Since constitution is a necessary component of a mechanistic explanation, understanding the former is necessary for understanding the latter. In particular, interpreting scientific cases of mechanistic explanation as instances of (genuine) explanation depends on the availability of sound criteria for constitution.
Finally, understanding constitution is also necessary for systematizing our study of constitutional dependencies by way of methodologies for constitutional discovery. Methodology crucially depends on conceptual clarity. In the same way conceptual clarity on causation has been pivotal in advancing empirical research, conceptual clarity on constitution, too, promises to advance a kind of empirical investigation, which is still not well understood but important to many sciences.
I will address these issues by (1) exploring theories of constitution alternative to Craver's mutual manipulability theory (see "An Abductive Theory of Constitution" and "Horizontal Surgicality and Mechanistic Constitution"); (2) applying the proposed theories to cases of mechanistic explanation; (3) operationalizing the proposed analysis into protocols for constitutional discovery (based on the framework developed in "Variable Definition and Independent Components").
Model-based reasoning (2016-2019)
How do scientists reason with scientific models? In particular, how, if at all, do they use models to explore what is possible, likely, or necessary? This project is motivated by the puzzling observation that so-called "minimal" models are often used to understand empirical phenomena that would be too hard to investigate (solely) on the basis of observations obtained in natural or controlled environments. What justifies this form of scientific reasoning?
Here is an example, which I discussed here. Bubbles and crashes in the economy are recurrent phenomena for which mainstream neoclassical macroeconomics, which assumes that markets are composed of many identical agents, has so far provided no convincing explanation. By contrast, highly idealized minimal models, which give up the homogeneity assumption, succeed at reproducing bubbles and crashes. Moreover, their proponents argue that the models explain bubbles and crashes (on the basis of the agents' heterogeneity). Several issues arise, of which I will mention two.
First, it seems that some scientists are implicitly committed to the view that model exploration provides support for scientific hypotheses (e.g., that the agents' heterogeneity generates bubbles and crashes), even without direct evidence on the workings of the system to be explained (e.g., the mechanism that governs the agents' decision making in the market). But how can model exploration, by itself, confirm, given the unrealistic and most likely false assumptions that enter these models? This objection has been voiced by, among others, Sugden and, more recently, Odenbaugh and Alexandrova. To address this question, I aim to rationalize the confirmatory role of model explorations within a Bayesian framework (see "Confirmation by Robustness Analysis").
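The Bayesian idea can be conveyed with a toy calculation (my own illustrative numbers, not the formal framework of the paper): if a result is more likely under the hypothesis than under its negation, then each model that independently reproduces the result raises the hypothesis's posterior, even though every model relies on false idealizations:

```python
# Toy Bayesian updating on robust model results. The likelihoods below are
# invented for illustration, not estimated from any actual model.
def update(prior, p_r_given_h, p_r_given_not_h):
    """Return P(H | R) by Bayes' theorem, given P(H) and the two likelihoods."""
    num = p_r_given_h * prior
    return num / (num + p_r_given_not_h * (1 - prior))

p = 0.5  # prior: agents' heterogeneity generates bubbles and crashes
for _ in range(2):  # two independently idealized models both reproduce bubbles
    p = update(p, p_r_given_h=0.8, p_r_given_not_h=0.3)

# posterior rises with each robust replication (here to roughly 0.88)
```

Treating the two model results as independent pieces of evidence, as this loop does, is itself a substantive assumption: the confirmatory boost depends on the models' auxiliary idealizations being sufficiently different from one another.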
Second, Batterman and Rice have drawn on Batterman's renormalization group account of explanation in physics to reconstruct minimal model explanations across the sciences. They claim that minimal models explain in virtue of showing the universality of their explananda (basically, their robustness across perturbations). By drawing on a variety of examples, I argue (see "How Minimal Models Explain") that outside physics this proposal faces a dilemma: either it is inapplicable, if strictly construed, or it is uninformative, if broadly construed. To deal with cases that sit uncomfortably under Batterman and Rice's umbrella, I propose an alternative account: minimal models explain in virtue of the robustness of the models' results across modelling assumptions. The explanation consists in showing the relevance and the salience (but not the universality) of the explanans with respect to the explanandum.