Research

In one way or another, all of my work is about how the formal and the normative make contact.  Much of my recent work has focused on how best to understand Bayesian diachronic coherence.  Here are abstracts (and some drafts) for some published and unpublished work.

published

Bayesian Coherentism, Synthese, forthcoming [penultimate draft]

This paper considers a problem for Bayesian epistemology and goes on to propose a solution to it.  On the traditional Bayesian framework, an agent updates her beliefs by Bayesian conditioning, a rule that tells her how to revise her beliefs whenever she gets evidence that she holds with certainty.  In order to extend the framework to a wider range of cases, Richard Jeffrey (1965) proposed a more liberal version of this rule that has Bayesian conditioning as a special case.  Jeffrey conditioning is a rule that tells the agent how to revise her beliefs whenever she gets evidence that she holds with any degree of confidence.  The problem?  While Bayesian conditioning has a foundationalist structure, this foundationalism disappears once we move to Jeffrey conditioning.  If Bayesian conditioning is a special case of Jeffrey conditioning, then they should have the same normative structure.  The solution?  To reinterpret Bayesian updating as a form of diachronic coherentism.
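
For readers who want the updating rules themselves, here is a standard textbook formulation (my gloss, not the paper's notation).  Where $P_{old}$ and $P_{new}$ are the agent's credences before and after the evidential episode, strict Bayesian conditioning on evidence $E$ learned with certainty says

\[ P_{new}(H) = P_{old}(H \mid E), \]

while Jeffrey conditioning on a partition $\{E_1, \ldots, E_n\}$ whose cells acquire new probabilities $q_1, \ldots, q_n$ says

\[ P_{new}(H) = \sum_{i=1}^{n} q_i \, P_{old}(H \mid E_i). \]

Setting $q_k = 1$ for a single cell $E_k$ recovers strict conditioning, which is the sense in which it is a special case of Jeffrey's rule.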

Commutativity, Normativity and Holism: Lange Revisited, Canadian Journal of Philosophy 50 (2): 159-173. 2020. [link]

Lange (2000) famously argues that although Jeffrey Conditionalization is non-commutative over evidence, it's not defective in virtue of this feature.  Since reversing the order of the evidence in a sequence of updates that don't commute does not reverse the order of the experiences that underwrite these revisions, the conditions required to generate commutativity failure at the level of experience will fail to hold in cases where we get commutativity failure at the level of evidence.  If our interest in commutativity is, fundamentally, an interest in the order-invariance of information, an updating sequence that does not violate such a principle at the more fundamental level of experiential information should not be deemed defective.  This paper claims that Lange's argument fails as a general defense of the Jeffrey framework.  Lange's argument entails that the inputs to the Jeffrey framework differ from those of classical Bayesian Conditionalization, in a way that makes them defective.  Therefore, either the Jeffrey framework is defective in virtue of not commuting its inputs, or else it is defective in virtue of commuting the wrong kind of inputs.
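
To see the commutativity failure the abstract refers to, in standard notation rather than Lange's own: write $J_{E,q}$ for the Jeffrey update that moves the probability of $E$ to $q$.  For strict conditioning, order is irrelevant, since conditioning on $E$ and then on $F$ is just conditioning on their conjunction:

\[ P(\cdot \mid E \cap F) = P(\cdot \mid F \cap E). \]

By contrast, for Jeffrey updates on overlapping partitions we have, in general,

\[ J_{F,r}(J_{E,q}(P)) \neq J_{E,q}(J_{F,r}(P)), \]

because the later update partly overwrites the probabilities the earlier one installed.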

Higher-Order Beliefs and the Undermining Problem for Bayesianism, Acta Analytica (2019) [link]

Jonathan Weisberg has argued that Bayesianism's rigid updating rules make Bayesian updating incompatible with undermining defeat.  In this paper, I argue that when we attend to the higher-order beliefs we must ascribe to agents in the kinds of cases Weisberg considers, the problem he raises disappears.  Once we acknowledge the importance of higher-order beliefs to the undermining story, we are led to a different understanding of how these cases arise.  And on this different understanding of things, the rigid nature of Bayesianism's updating rules is no obstacle to its accommodating undermining defeat.

under review

TITLE REMOVED  

This paper argues for a new account of Bayesian updating by taking a Retrospective Approach to diachronic coherence.  This approach says that an agent is diachronically coherent whenever the information on which she has revised her beliefs satisfies whatever constraint we would want our evidence to satisfy.  This approach contrasts with a common way of thinking about the Bayesian framework, according to which it treats evidence as a black box.  The aim of this paper is to provide a different interpretation of Bayesianism's main updating constraint by filling in this black box with a Bayesian account of evidence.

TITLE REMOVED

Recently some have challenged the idea that there are genuine norms of diachronic rationality.  Part of this challenge has involved offering replacements for diachronic principles.  Skeptics about diachronic rationality believe that we can provide an error theory for it by appealing to synchronic updating rules that, over time, mimic the behavior of diachronic norms.  In this paper, I argue that the most promising attempts to develop this position within the Bayesian framework are unsuccessful.  I show that the synchronic updating rules that Meacham (2010) and Hedden (2015a, 2015b) propose as surrogates for the diachronic norm of Conditionalization each fail for different reasons.  I conclude by proposing a new synchronic surrogate for Conditionalization that draws upon some of the features of each of these earlier attempts.
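
As a schematic illustration of the kind of rule at issue (my rendering, not a quotation from Meacham or Hedden): where diachronic Conditionalization demands that $cr_{t+1}(\cdot) = cr_t(\cdot \mid E)$, a synchronic surrogate instead constrains each time-slice on its own.  One familiar form of such a rule requires that an agent's credences at any time $t$ equal a fixed hypothetical prior conditioned on her total evidence at $t$:

\[ cr_t(\cdot) = P_{ur}(\cdot \mid E_t). \]

An agent who satisfies this at every moment will, when her evidence grows monotonically, end up with the same credences as one who conditionalized, which is what qualifies the rule as a surrogate.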

TITLE REMOVED  

Some claim that moral factors affect the epistemic status of our beliefs.  Call this the moral encroachment thesis.  It's been argued that the moral encroachment thesis can explain at least part of the wrongness of racial profiling.  The thesis predicts that the high moral stakes in cases of racial profiling make it more difficult for these racist beliefs to be justified.  This paper considers a class of racial generalizations that seem to do just the opposite of this.  The high moral stakes of the beliefs that we infer from these generalizations make it easier rather than harder for these beliefs to be justified.  I argue that the existence of this class of cases, cases of "positive profiling", gives us reason to expand our account of moral encroachment, in a way that brings it closer to the ideal of pragmatic encroachment that motivates it in the first place.

TITLE REMOVED 

This paper argues that Bayesians have accuracy-based reason to revise their beliefs with updates that commute.  A set of Bayesian updates that commutes does better, from the perspective of the value of accuracy, than a set of updates that fails to satisfy this constraint. 
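
For orientation (a standard setup, not necessarily the paper's own): a pair of updates $U_1, U_2$ commutes just in case $U_2(U_1(P)) = U_1(U_2(P))$ for every prior $P$, and accuracy-first arguments typically score credences with a strictly proper inaccuracy measure such as the Brier score,

\[ B(P, w) = \sum_{X} (P(X) - w(X))^2, \]

where $w(X)$ is 1 if $X$ is true at world $w$ and 0 otherwise.  The paper's claim is then that update sequences that commute fare better by such a measure than update sequences that do not.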

TITLE REMOVED 

Permissivism is the view that there is more than one rational response to a body of evidence.  Impermissivism is the denial of this claim.  Much of the debate between the permissivist and the impermissivist has proceeded by way of arguing for the unattractiveness of the opposing position.  An exception is the argument from Dogramaci and Horowitz (2016), which attempts to defend impermissivism on "positive" grounds by pointing to an ideal the impermissivist is able to realize.  This paper raises a dilemma for positive arguments for impermissivism, one that generalizes the problems that famously arise for formal constraints, like the Principle of Indifference.  It goes on to show that Dogramaci and Horowitz's argument faces this dilemma.  The aim of this paper is thus twofold: it undermines the argument from Dogramaci and Horowitz (2016), and it shows why no positive argument for impermissivism is likely to succeed.

in progress