Threshold Maintenance
During my Ph.D., the primary focus of my research has been formal logic and conditional probabilistic semantics. In my dissertation, Threshold Maintenance and the Structure of Conditional Inference, I develop a framework for understanding classical logical principles in conditional probabilistic settings.
While the classical notion of validity is truth preservation across all contexts, the probabilistic analogue is preservation of sufficiently high conditional probability, where what counts as "sufficiently high" depends on the context. I use a threshold to mark the contextually determined level of conditional probability beyond which an inference is acceptable. I call the management of these certainty levels in reasoning and inference "threshold maintenance".
My work differs from traditional work on the connections between probability and logic, which has focused on axiomatizing what is true in all systems of conditional probability equipped with thresholds. My focus is instead on understanding what these systems must look like if they are to sustain classical logic, at least in localized contexts. On this view, principles of classical logic serve as deep and well-motivated structural constraints on threshold maintenance. This is a project of rapprochement: taken one by one and hypothesis by hypothesis, principles of classical logic describe how the blooming and buzzing world of uncertainty is incrementally constrained.
I prove that probabilistic versions of conjunction introduction (Chapter 1) and conditional excluded middle (Chapter 2) are deep structural constraints on the underlying system of conditional probability equipped with thresholds. Probabilistic inference is a natural framework for non-monotonic logic. The other major system of non-monotonic logic is default logic, which John Horty's work has made prominent in philosophy in recent years. Chapter 3 presents a critique of Horty's work and a concomitant development of alternatives to his system within the general framework of default logic.
I intend initially to develop the chapters of my dissertation into independently publishable papers. Chapter 2 is currently under review. In what follows, I describe the three chapters in more detail and then say a few words about future projects.
-
In Chapter 1, I identify the circumstances in which conjunction introduction holds when rendered probabilistically. On this rendering, the indicative conditional is quantitatively expressed by the conditional probability of the consequent given the antecedent, provided that this probability exceeds a contextually determined threshold. Given that our conditionals satisfy classical patterns in many settings, it is natural to ask when and under what circumstances we can use conjunction introduction (from "if A then B" and "if A then C" to "if A then both B and C") when these conditionals are understood in terms of conditional probability. Conjunction introduction is a natural place to begin this investigation, since it is so basic to classical inference.
I argue that these probabilistic circumstances are not random or chaotic but identifiable in terms of other basic canons of inference, particularly Leitgeb’s notion of stability. A belief is stable if, when given any additional consistent information, the degree of belief remains above the threshold. I prove that once an agent updates probabilistically on the antecedent, if everything the agent believes is logically entailed by a stable proposition with respect to her updated credences, then probabilistic conjunction introduction holds. However, I also show that stability imposes a stricter condition in terms of maintaining the overall consistency of one’s core beliefs. Stability applies globally in the sense that a set is stable if it remains highly probable when conditioned on every consistent proposition. In contrast, probabilistic conjunction introduction can be satisfied by a smaller set of beliefs that are locally consistent. The value of the threshold varies depending on the strength and amount of evidence available in each context. This allows for context-sensitive epistemic adjustments since sometimes evidence that may seem weak in one domain is considered stronger in another, particularly when it is the most reliable information available.
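The threshold-sensitivity of conjunction introduction can be illustrated on a toy finite probability space. The sketch below is illustrative only; the worlds, weights, and threshold are hypothetical and are not the dissertation's formal apparatus. It exhibits a distribution on which both "if A then B" and "if A then C" clear a threshold of 4/5 while "if A then B and C" does not:

```python
from fractions import Fraction
from itertools import product

# Worlds are truth-value triples (a, b, c); weights are illustrative.
worlds = list(product([0, 1], repeat=3))
weight = {w: 1 for w in worlds}   # baseline weight 1 per world
weight[(1, 1, 1)] = 6             # given A, worlds with both B and C dominate
weight[(1, 0, 0)] = 0             # ...and B, C never fail together given A

def cond_prob(event, given):
    """P(event | given) on the finite space above, as an exact fraction."""
    denom = sum(weight[w] for w in worlds if given(w))
    numer = sum(weight[w] for w in worlds if given(w) and event(w))
    return Fraction(numer, denom)

A = lambda w: w[0] == 1
B = lambda w: w[1] == 1
C = lambda w: w[2] == 1

t = Fraction(4, 5)  # a contextually chosen acceptance threshold
p_b = cond_prob(B, A)                         # 7/8: "if A then B" clears t
p_c = cond_prob(C, A)                         # 7/8: "if A then C" clears t
p_bc = cond_prob(lambda w: B(w) and C(w), A)  # 3/4: the conjunction fails t
```

Raising or lowering t changes which inferences survive, which is exactly the threshold-maintenance question: on this distribution, probabilistic conjunction introduction holds for any threshold up to 3/4 and fails above it.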
-
In Chapter 2, I turn my attention to conditional excluded middle: "if A then B", or "if A then not-B". I use the framework of threshold maintenance to mount a challenge to the view that there is a fundamental semantic distinction between indicative and counterfactual conditionals. I prove that a probabilistic version of conditional excluded middle holds if and only if the framework locally aligns with Stalnaker's semantics for counterfactuals. The alignment is local because it works antecedent by antecedent, which is reasonable since some antecedents may validate conditional excluded middle while others may not. This contrasts with both pure classical logic and classic Stalnakerian semantics, where conditional excluded middle holds globally, that is, regardless of the specific antecedent and consequent.
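A minimal sketch of the probabilistic reading, with invented numbers: conditional excluded middle holds for an antecedent A when at least one of "if A then B" and "if A then not-B" clears the threshold. This is guaranteed whenever the conditional probability is pushed to an extreme, as in a Stalnaker-style picture where a unique closest A-world is selected, but can fail for an antecedent whose probability mass is split:

```python
def cem_holds(p_b_given_a, t):
    """Probabilistic conditional excluded middle for a single antecedent:
    either 'if A then B' or 'if A then not-B' clears the threshold t."""
    return p_b_given_a >= t or (1 - p_b_given_a) >= t

t = 0.9  # an illustrative contextual threshold

# A Stalnaker-aligned antecedent: a unique selected A-world pushes the
# conditional probability to 0 or 1, so CEM holds for any threshold.
assert cem_holds(1.0, t) and cem_holds(0.0, t)

# A non-aligned antecedent: mass split evenly over A-worlds leaves both
# conditionals below the threshold, and CEM fails locally.
assert not cem_holds(0.5, t)
```

The antecedent-by-antecedent character of the result shows up directly: whether `cem_holds` succeeds depends only on the conditional probability attached to that particular antecedent, not on the framework as a whole.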
-
Chapter 3 shifts attention to the other preeminent framework for non-monotonic logic: default logic, as developed in Horty's Reasons as Defaults (2012). The chapter examines contrary-to-duty conditional obligations within a Horty-style framework of default logic. In such a framework, the truth values of deontic statements are derived from defeasible reasons relevant to the specific context, represented as conditional statements that combine factual antecedents with deontic conclusions. Contrary-to-duty conditional obligations are special cases in which an agent becomes bound to an alternative obligation upon failing to meet a primary one. For example: "If I don't find the courage to tell her the truth, at least I should avoid lying." Due to their mixed nature, they are a source of several paradoxes for conditional reasoning in modal logic. I argue that contrary-to-duty conditionals hold only in sub-ideal situations and occur not as instances of defeasible reasoning but as fixed violations. As such, a Horty-style default system lacks the expressive means to represent the violability of obligations within the object language. A hybrid account of default reasons is therefore required, one that incorporates a notion of ideality that is perspectival in nature. In this framework, the agent does not derive her obligations solely from the scenario's binding reasons; rather, with the help of an accessibility relation, she records what is ideal from the perspective of a deductively closed set.
Further Research:
-
To date, I have focused my research on formal logic and probabilistic reasoning. As I proceed, I intend to investigate how these formal systems may be applied to AI. A promising direction is to examine how context-dependent thresholds in AI can be calibrated to reflect sound reasoning and minimize bias.
Poorly calibrated probabilistic thresholds can yield biased outcomes in AI systems if decision points rest on models that fail to capture the complexities of real-world scenarios. One avenue for refining these thresholds is to apply logical frameworks like those developed in my research on threshold maintenance. Just as importantly, investigating how thresholds are set and maintained in high-stakes social contexts may improve fairness in AI decision-making. This could lead to decisions that are both logically consistent and less biased in areas such as criminal justice and healthcare, where decision thresholds carry serious practical consequences, especially for underrepresented communities.
Consider an AI-driven risk assessment tool used in criminal justice to predict recidivism. Such a system might consistently overestimate risk for certain individuals based on incomplete or biased data, which may reflect societal biases like over-policing or disproportionate incarceration rates. If the thresholds that trigger interventions—such as increased surveillance or harsher sentences—are based on skewed data, the AI system will perpetuate, or even amplify, existing inequalities. Thus, a central goal is to determine under what conditions AI systems should be permitted to make decisions based on probabilistic thresholds.
The process of threshold selection and adjustment is crucial in this context. Most classifiers produce probabilities or scores that are translated into final decisions by setting a threshold. After a model is trained and its predictions evaluated, there is an opportunity to refine how probabilistic outputs translate into categorical decisions. Typically, classifiers assign instances to a "positive" category if their predicted probability exceeds 0.5, while those below this value are labeled "negative." Thresholds are chosen based on performance metrics (e.g., accuracy, precision, recall) or cost considerations. By assessing changes in True Positive and False Positive Rates across different thresholds, it is possible to identify a setting that meets specific criteria—such as lowering false alarms or improving recall for high-stakes cases. Many practitioners apply calibration techniques to adjust probabilities so that predicted likelihoods match observed frequencies. However, these methods often treat fairness and consistency as afterthoughts. They may add fairness checks—comparing error rates across groups—or try different thresholds to see if disparities shrink. While these steps are useful, the framework developed in this research can help extend threshold selection beyond conventional performance metrics.
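The threshold sweep described above can be sketched in a few lines. The scores and labels below are invented for illustration, not drawn from any real system:

```python
def rates(scores, labels, t):
    """True/false positive rates when predicting positive iff score >= t."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg

# Hypothetical classifier scores with their true labels (1 = positive).
scores = [0.95, 0.80, 0.70, 0.65, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    0,    1,    0,    0]

for t in (0.25, 0.50, 0.75):
    tpr, fpr = rates(scores, labels, t)
    print(f"t={t:.2f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```

On this toy data, lowering the threshold from 0.75 to 0.25 recovers every true positive but at the cost of more false alarms; the choice between such settings is precisely where contextual, and potentially fairness-sensitive, considerations enter.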
My approach introduces a more structured way to refine thresholds by using logical constraints derived from my theoretical work on conditional probabilistic inference. Instead of repeatedly guessing and checking new thresholds or relying solely on cost functions, the threshold maintenance framework suggests a set of logical conditions that the classifier’s conditional probabilities should satisfy if they are to produce stable and contextually appropriate decisions. When these conditions hold—which may only be in certain localized contexts—they provide a clear rationale for adjusting the threshold or flagging borderline cases for additional review. This means that while not universally applicable, these logical constraints can offer guidance in situations where standard metrics and ad hoc methods provide no clear direction.
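As one sketch of how such a constraint might be operationalized (the function, the margin, and the numbers are hypothetical, not an implementation of the dissertation's results): a classifier's predictions for two conditions and their conjunction can be tested against a conjunction-introduction-style constraint at a candidate threshold, with violations or near-threshold cases routed to review rather than decided automatically.

```python
def vet_threshold(p_b, p_c, p_joint, t, margin=0.05):
    """Vet a candidate threshold t against one logical constraint:
    if both component predictions clear t, the joint prediction should too.
    Returns 'review', 'borderline', or 'ok'."""
    if p_b >= t and p_c >= t and p_joint < t:
        return "review"      # constraint violated: t is unstable here
    if abs(p_b - t) < margin or abs(p_c - t) < margin:
        return "borderline"  # too close to the threshold to decide outright
    return "ok"

print(vet_threshold(0.90, 0.85, 0.70, t=0.80))  # review
print(vet_threshold(0.82, 0.95, 0.90, t=0.80))  # borderline
print(vet_threshold(0.90, 0.95, 0.88, t=0.80))  # ok
```

The point of the sketch is that the flag is principled rather than ad hoc: a "review" verdict signals that the threshold fails a logical coherence condition in that local context, exactly the kind of localized diagnosis the threshold-maintenance framework is meant to supply.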
-
A hybrid account of default reasoning would require that, in the ideal world, the agent's primary obligations are not solely generated by the scenario's contextually binding reasons. Instead, these obligations would also arise from the accessibility relation, which records what is ideal from the perspective of a deductively closed set. This suggests that the coordination principles governing the interaction between reasons and "oughts" should be reorganized. I am also interested in using this hybrid account to reconcile the flexibility of non-monotonic reasoning with the stability of universal ethical principles. My goal is to examine whether universalist frameworks, such as Kantian ethics or utilitarianism, can be interpreted as meta-rules, supported by second-order reasons, that inform the perspectival accessibility relation of ideality required for this hybrid account.