Preamble
If you are unfamiliar with my previous horrendous work in Philosophy with mathematics injected like a bad steroid, take a look at my previous blogpost:
Similar to that one, I wrote the following “essay” as my final paper for my Modal Logic course. I repeat: There is a lot of background and context that will be missing from the reading itself, and I am too lazy to add it for this blogpost. Nonetheless, I really liked the topic and the esoteric nature of applying philosophical tools to the least expected areas, in this case math.
I don’t expect this to be an easy read, because of the technical background and my poor writing skills. Still, I hope the average reader can get something out of this, ideally an idea of what plausibility models are.
Throughout the text I reference Halpern’s plausibility paper, because it’s the paper on which my entire text is based! I suggest giving that a read before mine, as it will give a more in-depth insight into what I’m rambling about.
Introduction
Epistemic modal logic provides a useful tool for answering philosophical problems dealing with notions of knowledge and belief. Doxastic modal logic in particular is rich in expressing propositions of the form “p is believed to be true”. These logics are typically underpinned by possible world semantics[1], in which models are viewed as Kripke frames. Kripke frames have shown great use outside of philosophy in computer science, where their flexible structure is used to automate reasoning about truthiness over a given set of predicates in areas such as formal verification and artificial intelligence. The development of these technologies has exposed naivete in our understanding of degrees of belief when reasoning about uncertainty. This has led to extensions of modal logic that include a notion of possibility, which follows from a set of probabilistic axioms and attempts to encode intuition about the truthiness of a proposition. With the rise of Large Language Models (LLMs), there has been an increasing need to encode semantic understanding beyond our current measure of probabilistic syntactic meaningfulness, to avoid the utterance of “correct” sentences with no logic behind them.
More precisely, when evaluating a statement of the form “I believe that John believes that φ”, we desire a qualitative way of automating reasoning about such propositions. The greater the degree of uncertainty in a given statement, the more subjective a truth value becomes, and the more dependent a rational answer is on the disposition of an agent. Such a framework extends beyond philosophy of language and the creation of sensical utterances. Instead, we focus on the implications of automatically asserting reasonable truth values to epistemic questions. Automating reasonable dispositions is not just a matter of computer science: such automation can also provide a useful tool for analytical philosophy and for checking philosophical arguments for soundness.
Plausibility measures as introduced by Halpern are an attempt to generalize the Dempster-Shafer belief functions and provide qualitative reasoning in domain-specific questions on an “as needed” basis. Plausibility measures give us insight into essential features of the propositions under evaluation while serving as a general framework for reasoning about uncertainty. In this paper, we describe attempts to encode the power of plausibility measures into epistemic modal logic and justify the usefulness of such a framework through the lottery paradox. We also bring up Wittgensteinian concerns with Kripke's general framework for asserting truthiness and the assumptions about truth needed to make Kripke models sound. We show that relaxing these assumptions should not impede their use, as they can still provide a lightweight and general approach for reasoning about uncertainty in the pragmatic case, aided by a series of more sophisticated measures and structures for domain-specific problems, such as categorical grammars for reasoning about uncertainty in philosophy of language and coalgebraic semantics for epistemology.
Modal logic as measures
Kripke frames give us the form

$$M = \langle W, R, V \rangle$$

where $W$ denotes the set of possible worlds, $R$ is a binary relation on $W$ serving as an accessibility relation, and $V$ is a multivalued mapping from the set of atomic propositions to $W$, known as the value assignment function. Since an accessibility relation $R$ can itself be regarded as a reflexive multivalued mapping from $W$ to $W$, Tsiporkova has shown that expressions for the truth sets of modal propositions can be rewritten in terms of inverse and superinverse images.
Let $F$ be a multivalued mapping from a universe $X$ into a universe $Y$. The domain of $F$ is thus:

$$\mathrm{Dom}(F) = \{x \in X : F(x) \neq \emptyset\}$$

The formal definition of inverse image used here is that the inverse image of a subset $B \subseteq Y$ under $F$ is the subset:

$$F^{-}(B) = \{x \in \mathrm{Dom}(F) : F(x) \cap B \neq \emptyset\}$$

The superinverse follows:

$$F^{+}(B) = \{x \in \mathrm{Dom}(F) : F(x) \subseteq B\}$$

This lets us express the truth sets of modal propositions as inverse and superinverse images under $R$ of truth sets of non-modal propositions:

$$\|\Diamond p\| = R^{-}(\|p\|), \qquad \|\Box p\| = R^{+}(\|p\|)$$
This formalization helps us deal with the conditional probabilities of truth sets for a subset of logical expressions involving possibilitations and necessitations, which underpin modal logic in measure theory (the inverse and superinverse images relate to these operations respectively). But why is it important to have this equivalence? As one would expect, this model is not robust enough to deal with questions about uncertainty, at least in an elegant manner. Taking a step back from formalisms, we can gain some intuition for how probability helps evaluate basic statements. First, we have to understand that this is a subjective notion of probability, and we are seeking to measure our disposition towards a rational action. This disparity in degrees of belief is what we seek to model as a probability measure[2].
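To make the two images concrete, here is a tiny Python sketch (my own toy example, not taken from any of the cited papers) that computes the inverse and superinverse images of a relation and, from them, the truth sets of “possibly p” and “necessarily p”:

```python
# Toy Kripke frame: worlds W, accessibility relation R (a set of pairs),
# and a valuation V giving the truth set of each atomic proposition.
# All world names here are invented for illustration.
W = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w2"), ("w3", "w3")}
V = {"p": {"w2"}}

def successors(w):
    """R viewed as a multivalued mapping from W to W."""
    return {v for (u, v) in R if u == w}

def inverse(B):
    """Inverse image: worlds with at least one R-successor in B."""
    return {w for w in W if successors(w) & B}

def superinverse(B):
    """Superinverse image: worlds whose R-successors all lie in B."""
    return {w for w in W if successors(w) <= B}

diamond_p = inverse(V["p"])   # truth set of "possibly p"
box_p = superinverse(V["p"])  # truth set of "necessarily p"
```

Note that the superinverse is vacuously satisfied at worlds with no successors; in this toy frame every world has at least one, so the subtlety does not arise.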
The need for probability
Let us consider two separate statements: “It rained last week”, when it did not rain last week, and “The moon is made out of cheese”, when in fact, we all know the moon to not be made out of cheese. Classically in our model for epistemic modal logic, both of these statements would have false truth values when translated and evaluated. However, intuitively it becomes clear that the former is easier to accept than the latter; that is, we have a disposition to reject “The moon is made out of cheese” with a larger margin of disbelief. Licato has shown a similarity measure[3] for reasoning about this intuition in an automated manner.
Probability measures are strong because of their flexibility in domain-specific problems. Boeva suggests an extension of our Kripke frames to include a probability measure. If we consider a universe $X$ with propositions of the form

$$p_1, p_2, \ldots, p_n$$

our new model is now of the form

$$M = \langle W, R, V, \mu \rangle$$

such that

$$\mu : \mathcal{P}(W) \to [0,1], \qquad \mu(W) = 1$$

is a probability measure on the set of worlds. There is an important assumption needed to allow this extended framework to work: in each world there is one and only one proposition that is true. This is known as the Single Value Assumption, which implies that

$$\Box(p_1 \lor \cdots \lor p_n) \quad \text{and} \quad \Diamond(p_1 \lor \cdots \lor p_n)$$

are always true in $M$ (following our previous equivalence for translating non-modal propositions into modal logic).
This notion of probability in modal logic is useful, as we can now inject a probability measure into a model as required and even automate intuitive reasoning for an arbitrary set of propositions. This is the first step towards dealing with uncertainty in a sophisticated manner. We can assert a stronger truth value for “It rained last week” and a weaker truth value for “The moon is made out of cheese”, even if we have the same amount of evidence for both at the time, assuming we have a disposition to believe one over the other.
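As an illustration, here is a small Python sketch of a probability measure over a toy set of worlds. The worlds and numbers are made up to reflect the dispositions described above; they are not part of Boeva's construction:

```python
# Hypothetical worlds weighted by a subjective probability measure mu.
# The weights encode an agent's dispositions, not empirical frequencies.
worlds = ["rained_last_week", "dry_last_week", "cheese_moon"]
mu = {"rained_last_week": 0.45, "dry_last_week": 0.549, "cheese_moon": 0.001}
assert abs(sum(mu.values()) - 1.0) < 1e-9  # mu assigns 1 to the whole space

def prob(A):
    """Probability of an event A (a set of worlds)."""
    return sum(mu[w] for w in A)

# Even though both statements are false in the actual world, the model
# records a far weaker disposition to accept the cheese-moon claim:
p_rain = prob({"rained_last_week"})
p_cheese = prob({"cheese_moon"})
```

The asymmetry between `p_rain` and `p_cheese` is exactly the “margin of disbelief” from the earlier example, now made numeric.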
Plausibility measures
The quantitative approach of probability measures becomes unsatisfactory when dealing with more nuanced dilemmas, not to mention there is a set of probability axioms (which we will not discuss for brevity) that limits the use of such models. As an attempt to ameliorate this, there have been other attempts to generalize separate notions of necessity and possibility (such as possibility measures and belief functions) to better encompass the need for Bayesian reasoning in belief systems. Halpern introduces plausibility measures, a form of representing information qualitatively. Plausibility measures associate events with their respective plausibility, which is algebraically characterized as some partially ordered set. These representations are nontrivial: choosing the appropriate set of possible worlds dynamically[4] is difficult when we reason only with values drawn from the interval $[0,1]$. Plausibility measures remove this numeric structure and relax the need to examine certain properties of interest.
A plausibility measure lives in a plausibility space, a generalization of a probability space. Here, rather than mapping sets into $[0,1]$, we map elements to some arbitrary partially ordered set. Formally, a plausibility space is characterized by the tuple

$$S = (W, \mathcal{F}, D, \mathrm{Pl})$$

where $W$ is the set of worlds, $\mathcal{F}$ is an algebra of subsets of $W$, $D$ is a domain of plausibility values partially ordered by a relation $\leq_D$, and $\mathrm{Pl}$ is a mapping from $\mathcal{F}$ to $D$. We then take the form

$$\mathrm{Pl}(A) \leq_D \mathrm{Pl}(B)$$

as “B is at least as plausible as A”. Thus, we can start evaluating the truthiness of an assertion of this form given a set of assumptions.
We have to assume that

$$A \subseteq B \implies \mathrm{Pl}(A) \leq_D \mathrm{Pl}(B)$$

meaning that a set is at least as plausible as any of its subsets. To see this at work, consider a toy assumption: “If the moon was made out of cheese, then it would always be raining”. The cheese-moon worlds $A$ are then a subset of the always-raining worlds $B$, so $B$ is at least as plausible as $A$. For more sophisticated assumptions, the subset implication does not give us a straight answer, but we can still reach a rational evaluation given a set of assumptions, such as those provided through big data models.
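The point that plausibility values need only be partially ordered, not numeric, can be sketched in a few lines of Python. The labels, worlds, and ordering below are my own toy choices, and I use a simple chain of values for readability (any poset satisfying the subset axiom would do):

```python
# Plausibility values drawn from an ordered domain D rather than [0,1].
RANK = {"implausible": 0, "somewhat": 1, "plausible": 2}

# Toy worlds: cheese-moon-and-raining, raining-only, and dry.
W = {"cheese_rain", "just_rain", "dry"}

# Pl assigns a plausibility value to each event of interest.
Pl = {
    frozenset(): "implausible",
    frozenset({"cheese_rain"}): "implausible",            # A: moon is cheese
    frozenset({"cheese_rain", "just_rain"}): "somewhat",  # B: it is raining
    frozenset(W): "plausible",
}

def at_least_as_plausible(A, B):
    """True when Pl(A) <= Pl(B) in the ordering on D."""
    return RANK[Pl[frozenset(A)]] <= RANK[Pl[frozenset(B)]]

# The axiom: A being a subset of B must imply Pl(A) <= Pl(B).
A = {"cheese_rain"}
B = {"cheese_rain", "just_rain"}
assert A <= B and at_least_as_plausible(A, B)
```

Nothing here depends on arithmetic over the values; only the order relation is ever consulted, which is the whole appeal of the qualitative approach.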
Plausibility measures as modal logic
Research by Halpern has shown an algebraic understanding of plausibility measures for their use in reasoning about uncertainty. However, the rich structure of doxastic logic seems promising for encoding plausibility measures as Kripke models. Boeva suggests the construction of a minimal model of modal logic for given plausibility and belief measures. We can take our previous notion of a Kripke frame extended with a probability measure, and induce a plausibility measure as:

$$\mathrm{Pl}(A) = \mu(R^{-}(A))$$

For necessity, Halpern showed that the dual of a plausibility measure is a belief measure (sometimes referred to as a necessity measure) of the form:

$$\mathrm{Bel}(A) = 1 - \mathrm{Pl}(\overline{A})$$

Finally, Boeva provides us with an important theorem asserting that a reflexive multivalued mapping $R$ on a probability space induces a plausibility measure and a belief measure on $\mathcal{P}(W)$, defined by

$$\mathrm{Pl}(A) = \mu(R^{-}(A)), \qquad \mathrm{Bel}(A) = \mu(R^{+}(A))$$

which are the plausibility and belief measures induced by $R$ on $\mathcal{P}(W)$. This gives us the framework we need to describe a Kripke frame of the form

$$M = \langle W, R, V, \mu \rangle$$

such that it satisfies the weak singleton valuation assumption: at least one proposition is true in each world. This framework helps us discard nonsense assertions and provides a flexible understanding of incoming information to adjust our disposition.
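A minimal sketch of the induced measures, assuming the Dempster-style reading where the plausibility of an event is the probability mass of its inverse image and belief is that of its superinverse image. The frame, relation, and weights below are invented for illustration:

```python
# Induced measures on a toy frame:
#   Pl(A)  = mu(R^-(A))   (inverse image: some successor lands in A)
#   Bel(A) = mu(R^+(A))   (superinverse image: every successor lands in A)
W = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w2"), ("w3", "w3")}
mu = {"w1": 0.5, "w2": 0.3, "w3": 0.2}  # probability measure on W

def successors(w):
    return {v for (u, v) in R if u == w}

def plausibility(A):
    return sum(mu[w] for w in W if successors(w) & A)

def belief(A):
    return sum(mu[w] for w in W if successors(w) <= A)

A = {"w2"}
# Belief never exceeds plausibility, mirroring necessity vs. possibility.
assert belief(A) <= plausibility(A)
```

Here `plausibility({"w2"})` collects the weight of worlds that can reach `w2` (`w1` and `w2`), while `belief({"w2"})` collects only worlds forced into `w2` (just `w2` itself), so the belief value is strictly smaller.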
Lottery paradox
We can now see the effect of such a model on a paradox embedded with uncertainty. Let Γ be a perfectly accurate description of a 1,000,000,000-ticket lottery, of which a rational agent $a$ is fully apprised. Assume that from Γ it can be proven that at most one ticket will win. The paradox now asserts that since $a$ believes all propositions in Γ to be true, $a$ can deduce from this belief that there is at least one ticket that will win. However, with our probability measure, we can take into account that the probability of a particular ticket $t_i$ winning is $\frac{1}{1{,}000{,}000{,}000}$, a vanishingly small value. Our agent $a$ is then disposed to believe that the ticket $t_1$ won't win, which serves as a valid belief. Reasoning about this uncertainty under our model, we can make use of the plausibility relation to induce two new inferences that provide a notion of strength to this belief.
First, we assume that $a$ is certain of all propositions in Γ. We can then describe certainty as a property: an agent $a$ at time $t$ is certain about ψ if and only if ψ is beyond reasonable doubt (returns a value of 1 in the probability measure) and there is no φ such that believing φ is more reasonable for $a$ at time $t$ than believing ψ[5]. Now, since the probability of winning is less than the probability of not winning, for every $t_i$ there is some presumption in favor of $t_i$ not winning now. But since the probability of any given ticket winning is so small, there is also some presumption in favor of there not existing a time $t$ at which the agent $a$ wins.
The belief structure here is more nuanced, as we have now introduced notions of possibility and plausibility to ameliorate the naive understanding that it is possible for an agent both to have a chance to win and not to have a chance to win. With a Kripke frame induced on an ordinary probability measure alone, we would also end up with this naive understanding, as we cannot express certainty beyond reasonable doubt non-numerically.
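The numeric core of the paradox fits in a few lines. I use a naive threshold (“Lockean”) rule for belief as an illustrative device here; it is not Halpern's formalism, but it shows exactly where closing belief under conjunction goes wrong:

```python
# The lottery paradox with a naive threshold rule for belief.
N = 1_000_000_000
p_win = 1 / N     # probability that any particular ticket wins
THRESHOLD = 0.99  # believe a proposition when its probability exceeds this

def believed(p):
    return p > THRESHOLD

# For each ticket t_i, "t_i will not win" is believed...
assert believed(1 - p_win)
# ...and "at least one ticket wins" follows from Gamma with certainty:
assert believed(1.0)
# ...but the conjunction of all those individual beliefs, "no ticket
# wins", has probability 0 under Gamma and is not believed. Closing
# belief under conjunction is what generates the paradox.
assert not believed(0.0)
```

The plausibility relation above lets us keep the individual beliefs while refusing the conjunction, without ever committing to a particular numeric threshold.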
Wittgensteinian concerns
Although this formalization of Kripke semantics helps refine issues with uncertainty in epistemic logics, there is a Wittgensteinian concern that the semantics are embedded with what has been termed “the ghost of Tarski” (as shown by Horsten). This is because, with plausibility measures, we claim that we can determine the truth value of a liar sentence with Bayesian reasoning. Kripke notes that although his formalizations contain their own truth predicates, there are natural notions of untruth that they cannot express about themselves, and that to express them one must ascend to a higher level. This is a fundamental issue with a first-order understanding of these logics. De Sousa claims that in an attempt to solve this problem, we should allow the compositional rules that determine the truth value of a sentence in terms of the semantic values of its words to have exceptions. We maintain that the compositional nature of plausibility measures ameliorates this issue, as we do not seek to derive a probabilistic outcome for an utterance; rather we make use of certainty in a domain-specific way. This can be characterized by the Montague-like separation of a semantic algebra (as exposed in this paper) and a syntactic notion of meaning in philosophy of language. Because we now have a direct bridge connecting our modal operators of necessity and possibility to algebraic structures of domain and codomain, we can make use of structures such as categorical grammars (which exhibit a strong relationship between syntax and semantic composition). Another example is the unification of modal logics through coalgebraic semantics as described by Kupke.
Conclusion
We show that Kripke frames induced by plausibility measures serve as a general framework for domain-specific problems in philosophy that make use of composition. We suggest that the further use of these general modal logics, paired with domain-specific approaches, is a rational approach for solving philosophical paradoxes and pragmatic problems in reasoning about uncertainty automatically.
Footnotes

[1] The scope of this paper will only take into consideration epistemically possible worlds and not metaphysically possible worlds, nor try to make any meaningful connection between the two.

[2] I will continue to use the term probability measure to refer to the overall idea of using a measure and assigning value 1 to the entire probability space; hence I am assuming there exists some arbitrary set function which returns values from the interval [0,1].

[3] I will not be showing the measure in this paper for the sake of space, but it's important to note that other non-probabilistic measures have been used to deal with this sort of analogical reasoning.

[4] I will be using the word dynamic to represent a Bayesian notion of incoming information changing an agent's knowledge.

[5] This notion of certainty is characterized as an axiom in plausibility measures.