In some way or other, all of my work is about how the formal and the normative make contact.  Much of my previous work is centered on Bayesian norms for updating on uncertain evidence.  Two of my papers develop the idea that we ought to take diachronic coherence to be a relation that is defined over sets of probability shifts.  I argue that doing this yields a framework that is more parsimonious than the orthodox Bayesian model.  I also argue that my model can accommodate certain attractive externalist intuitions. 


I’ve also written on permissivism and time-slice epistemology from a metaepistemological perspective.  I argue that adopting a pluralistic approach to the question of what makes formal epistemology ``formal'' leads to interesting defenses of these views.


My recent work is more directly focused on the normativity of logic.


Below are abstracts and drafts of some published and unpublished work.


Bayesian Coherentism, Synthese, forthcoming [penultimate draft]

This paper considers a problem for Bayesian epistemology and goes on to propose a solution to it.  On the traditional Bayesian framework, an agent updates her beliefs by Bayesian conditioning, a rule that tells her how to revise her beliefs whenever she learns a piece of evidence with certainty.  In order to extend the framework to a wider range of cases, Richard Jeffrey (1965) proposed a more liberal version of this rule that has Bayesian conditioning as a special case.  Jeffrey conditioning is a rule that tells the agent how to revise her beliefs whenever she comes to hold evidence with any degree of confidence.  The problem?  While Bayesian conditioning has a foundationalist structure, this foundationalism disappears once we move to Jeffrey conditioning.  But if Bayesian conditioning is a special case of Jeffrey conditioning, then the two should have the same normative structure.  The solution?  To reinterpret Bayesian updating as a form of diachronic coherentism.
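For reference, the two rules can be stated side by side (these are the standard textbook formulations, not notation taken from the paper itself):

```latex
% Bayesian conditioning: upon learning E with certainty,
\[ P_{\mathrm{new}}(A) \;=\; P_{\mathrm{old}}(A \mid E) \]
% Jeffrey conditioning: upon a shift over a partition \{E_i\},
\[ P_{\mathrm{new}}(A) \;=\; \sum_i P_{\mathrm{old}}(A \mid E_i)\, P_{\mathrm{new}}(E_i) \]
```

Setting the partition to $\{E, \neg E\}$ and $P_{\mathrm{new}}(E) = 1$ recovers Bayesian conditioning as the special case.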

Commutativity, Normativity and Holism: Lange Revisited, Canadian Journal of Philosophy 50 (2): 159-173. 2020. [link]


Lange (2000) famously argues that although Jeffrey Conditionalization is non-commutative over evidence, it is not defective in virtue of this feature.   Since reversing the order of the evidence in a sequence of updates that don't commute does not reverse the order of the experiences that underwrite these revisions, the conditions required to generate commutativity failure at the level of experience will fail to hold in cases where we get commutativity failure at the level of evidence.  If our interest in commutativity is, fundamentally, an interest in the order-invariance of information, an updating sequence that does not violate such a principle at the more fundamental level of experiential information should not be deemed defective.  This paper claims that Lange's argument fails as a general defense of the Jeffrey framework.  Lange's argument entails that the inputs to the Jeffrey framework differ from those of classical Bayesian Conditionalization, in a way that makes them defective.  Therefore, either the Jeffrey framework is defective in virtue of not commuting its inputs, or else it is defective in virtue of commuting the wrong kinds of inputs.
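A toy numerical illustration of the non-commutativity at issue (my own example, not one from Lange's paper): two Jeffrey shifts over partitions that are correlated under the prior, applied in opposite orders, land on different posteriors.

```python
def jeffrey_update(prior, cells):
    """Jeffrey-conditionalize `prior` (a dict mapping worlds to
    probabilities) on a partition, given as a dict mapping each
    cell (a frozenset of worlds) to its new probability."""
    posterior = {}
    for cell, q in cells.items():
        p_cell = sum(prior[w] for w in cell)
        for w in cell:
            # redistribute q over the cell in proportion to the prior
            posterior[w] = prior[w] * q / p_cell
    return posterior

# Four worlds; E = {1, 2}, F = {1, 3}, with a prior under which
# E and F are correlated (otherwise the two shifts would commute).
prior = {1: 0.4, 2: 0.3, 3: 0.2, 4: 0.1}
shift_E = {frozenset({1, 2}): 0.5, frozenset({3, 4}): 0.5}
shift_F = {frozenset({1, 3}): 0.5, frozenset({2, 4}): 0.5}

e_then_f = jeffrey_update(jeffrey_update(prior, shift_E), shift_F)
f_then_e = jeffrey_update(jeffrey_update(prior, shift_F), shift_E)

# The two orders disagree about world 1: 3/13 vs 4/17
print(e_then_f[1], f_then_e[1])
```

The disagreement is at the level of evidence (the post-shift probabilities assigned to the partition cells); Lange's point is that it need not reflect any order-reversal at the level of the underlying experiences.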

Higher-Order Beliefs and the Undermining Problem for Bayesianism, Acta Analytica (2019) [link]

Jonathan Weisberg has argued that Bayesianism's rigid updating rules make Bayesian updating incompatible with undermining defeat.  In this paper, I argue that when we attend to the higher-order beliefs we must ascribe to agents in the kinds of cases Weisberg considers, the problem he raises disappears.  Once we acknowledge the importance of higher-order beliefs to the undermining story, we are led to a different understanding of how these cases arise.  And on this different understanding of things, the rigid nature of Bayesianism's updating rules is no obstacle to its accommodating undermining defeat.

under review


This paper argues for a new account of Bayesian updating by taking a Retrospective Approach to diachronic coherence.  This approach says that an agent is diachronically coherent whenever the information she has revised her beliefs on satisfies whatever constraint we would want our evidence to satisfy.  This approach contrasts with a common way of thinking about the Bayesian framework, according to which it treats evidence as a black box.  The aim of this paper is to provide a different interpretation of Bayesianism's main updating constraint by filling in this black box with a Bayesian account of evidence.  


Recently some have challenged the idea that there are genuine norms of diachronic rationality.  Part of this challenge has involved offering replacements for diachronic principles.  Skeptics about diachronic rationality believe that we can provide an error theory for it by appealing to synchronic updating rules that mimic the behavior of diachronic norms.  In this paper, I argue that the most promising attempts to develop this position within the Bayesian framework are unsuccessful.  I defend a new synchronic surrogate of Conditionalization that draws upon some of the features of each of these earlier attempts.  At the heart of this discussion is the question of what exactly it means to say that one norm is a surrogate for another.  I suggest that surrogacy, in the given context, can be taken as a proxy for the degree to which formal and traditional epistemology can be made compatible.
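Schematically, the contrast between the diachronic norm and the style of synchronic surrogate at issue can be put as follows (the notation is mine, not the paper's; the surrogate shown is one familiar style, roughly in the spirit of ur-prior conditionalization):

```latex
% Diachronic Conditionalization: if the agent's total new evidence
% between t and t' is E, then
\[ P_{t'}(\cdot) \;=\; P_{t}(\cdot \mid E) \]
% A synchronic surrogate: at each time t, where E_t is the agent's
% total evidence at t and P_H a hypothetical prior,
\[ P_{t}(\cdot) \;=\; P_{H}(\cdot \mid E_t) \]
```

The first constraint relates credences across two times; the second is stated entirely within a single time, which is what allows the skeptic to claim it can mimic the diachronic norm's verdicts without presupposing genuine diachronic requirements.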


Some claim that moral factors affect the epistemic status of our beliefs.  Call this the moral encroachment thesis.  It's been argued that the moral encroachment thesis can explain at least part of the wrongness of racial profiling.  The thesis predicts that the high moral stakes in cases of racial profiling make it more difficult for these racist beliefs to be justified.  This paper considers a class of racial generalizations that seem to do just the opposite of this.  The high moral stakes of the beliefs that we infer from these generalizations make it easier rather than harder for these beliefs to be justified.  I argue that the existence of this class of cases---cases of ``positive profiling''---gives us reason to expand our account of moral encroachment, in a way that brings it closer to the idea of pragmatic encroachment that motivates it in the first place.


Permissivism is the view that there is more than one rational response to a body of evidence. Impermissivism is the denial of this claim.  The debate between the permissivist and the impermissivist has proceeded, in large part, by way of arguing for the unattractiveness of the opposing position.  An exception is the argument from Dogramaci and Horowitz (2016), which attempts to defend impermissivism on ``positive'' grounds.  This paper develops what I will call the positive argument for impermissivism.  It goes on to argue that this argument faces a dilemma, one that generalizes the problems that famously arise for formal constraints like the Principle of Indifference.  The aim of this paper is thus twofold: to undermine the argument from Dogramaci and Horowitz, and, by uncovering surprising similarities between historical and contemporary arguments for this position, to show why no positive argument for impermissivism is likely to succeed.

in progress

A paper on normative modeling