Have you ever been in an argument where the other person “just wasn’t being reasonable?” What were they doing that was “without reason?”
- Making “facts” up out of thin air?
- Forming connections between claims without appropriate evidence?
- Jumping between unrelated arguments?
- Not addressing the problem at hand?
How do you know that you were being reasonable and your partner was the unreasonable one? Is there an agreed-upon set of rules for what it means to operate “within reason” or is it simply a matter of perspective?
Mathematical logic is the study of formal logical reasoning. Logic itself, in turn, is one of the foundations of both mathematics and computer science. Recall that mathematical prose contains propositions that assert relationships between definitions. These propositions are stated in terms of mathematical logic and thus form one of the cornerstones of the discipline.
As computer scientists, we care about logic for similar reasons. We care about the properties of the algorithms we design and implement, in particular, their correctness and complexity. Logic gives us a way of stating and reasoning about these properties in a precise manner.
Propositional Logic
Let us begin to formalize our intuition about propositions. First, how can we classify propositions? We can classify them into two buckets:
- Atomic propositions that make elementary claims about the world. “It is beautiful today” and “it is rainy today” are both elementary claims about the weather.
- Compound propositions that are composed of smaller propositions. “It is either rainy today or it is beautiful today” is a single compound proposition made up of two atomic propositions.
This is not unlike arithmetic expressions that contain atomic parts—integers—and compound parts—operators. As you learned about arithmetic throughout grade school, you learned about each operator and how they transformed their inputs in different ways. Likewise, we will learn about the different logical operators—traditionally called connectives in mathematical logic—and how they operate over propositions.
We can formally define the syntax of a proposition \(p\) similarly to arithmetic using a grammar:
\[ p ::= A \mid \top \mid \bot \mid \neg p \mid p_1 \wedge p_2 \mid p_1 \vee p_2 \mid p_1 \rightarrow p_2. \]
A grammar defines, for a sort (the proposition \(p\)), its set of allowable syntactic forms, separated by pipes (\(\mid\)). A proposition can take on one of the following forms (after the list, we sketch one way this grammar might be encoded in Racket):
- Atomic propositions, elementary claims about whatever domain we are considering. When we don’t care about the particular domain (e.g., we want to hold the domain abstract), we’ll use capital letters such as \(A\) as meta-variables for atomic propositions.
- Top, \(\top\) (LaTeX: `\top`), pronounced “top”, the proposition which is always provable.
- Bot, \(\bot\) (LaTeX: `\bot`), pronounced “bottom” or “bot”, the proposition which is never provable.
- Logical negation, \(\neg p\) (LaTeX: `\neg`), pronounced “not”, which stands for proposition \(p\), but negated. For example, the negation of \(p\) = “It is beautiful today” is \(\neg p\) = “It is not beautiful today”.
- Logical conjunction, \(p_1 \wedge p_2\) (LaTeX: `\wedge`), pronounced “and”, which stands for the proposition where both \(p_1\) and \(p_2\) are provable. For example, the conjunction of \(p_1\) = “It is beautiful today” and \(p_2\) = “The sun is out” claims that it is both beautiful and the sun is out.
- Logical disjunction, \(p_1 \vee p_2\) (LaTeX: `\vee`), pronounced “or”, which stands for the proposition where at least one of \(p_1\) or \(p_2\) is provable. For example, the disjunction of \(p_1\) = “It is beautiful today” and \(p_2\) = “It is raining today” claims that it is either beautiful today or it is raining today (or both are true).
- Logical implication, \(p_1 \rightarrow p_2\) (LaTeX: `\rightarrow`), pronounced “implies”, which stands for the proposition where \(p_2\) is provable assuming \(p_1\) is provable. For example, if \(p_1\) = “It is cloudy” and \(p_2\) = “It is raining”, then \(p_1 \rightarrow p_2\) claims that it is raining, assuming that it is cloudy.
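To make the grammar concrete, here is one way we might encode propositions as a Racket datatype, with one struct per syntactic form. This is only an illustrative sketch; the struct names are our own invention, not notation from the text:

;; A Prop is one of:
;;   (Atom name)     an atomic proposition, identified by a name
;;   (Top)           ⊤, the always-provable proposition
;;   (Bot)           ⊥, the never-provable proposition
;;   (Neg p)         ¬p
;;   (Conj p1 p2)    p1 ∧ p2
;;   (Disj p1 p2)    p1 ∨ p2
;;   (Imp p1 p2)     p1 → p2
(struct Atom (name) #:transparent)
(struct Top () #:transparent)
(struct Bot () #:transparent)
(struct Neg (p) #:transparent)
(struct Conj (p1 p2) #:transparent)
(struct Disj (p1 p2) #:transparent)
(struct Imp (p1 p2) #:transparent)

For example, the proposition \(A_1 \wedge \neg A_2\) would be written (Conj (Atom 'A1) (Neg (Atom 'A2))).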
Notably, logical implications are the conditional propositions we have seen previously of the form “if … then … .” For example, if we had the following program correctness claim:
Claim: for all natural numbers `n`, if `(>= n 0)` then `(> (factorial n) 0)`.
We might write it concisely using the formal notation above as:
Claim: for all natural numbers `n`, `(>= n 0)` \(\rightarrow\) `(> (factorial n) 0)`.
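For concreteness, `factorial` here could be the usual recursive definition. The claim above does not fix a particular implementation, so this is just one standard Racket version:

;; factorial : natural? -> natural?
;; Computes n! recursively, with (factorial 0) = 1.
(define (factorial n)
  (if (zero? n)
      1
      (* n (factorial (- n 1)))))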
This form of logic is called propositional logic because it focuses on propositions and the basic logical connectives between them. There exist more exotic logics that build upon propositional logic with additional connectives that capture richer sorts of propositions we might consider.
Modeling with Propositions
Recall that mathematics is the study of modeling the world. In the case of mathematical logic, we would like to model real-world propositions using the language of mathematical propositions we have defined so far. This amounts to translating our informal natural language descriptions into formal mathematical descriptions.
To do this, keep in mind that mathematical propositions are formed from atomic and compound propositions. We then need to:
1. Identify the atomic propositions in the statement.
2. Determine how the atomic propositions are related through various compound propositions.
As an example, let’s consider a proposition more complicated than the previous ones that we have encountered:
"If it is either the case that the grocer delivers our food on time, or I get out to the store this morning, then the party will start on time and people will be happy.
First, we identify the domain under which we are making this proposition: preparing for a party. Thus, the atomic propositions are the statements that directly talk about preparing for a party. These propositions are:
- \(p_1\) = “The grocer delivers our food on time.”
- \(p_2\) = “I get out to the store this morning.”
- \(p_3\) = “The party will start on time.”
- \(p_4\) = “People will be happy.”
Now, we re-analyze the statement to determine how the propositions are related, translating those relations into logical connectives. You may find it helpful to replace every occurrence of the atomic propositions with variables to make the connecting language pop out:
“If it is either the case that \(p_1\) or \(p_2\) then \(p_3\) and \(p_4\).”
From this reformulation of the statement, we can see that:
- \(p_1\) and \(p_2\) are related by the word “or” which implies disjunction.
- \(p_3\) and \(p_4\) are related by the word “and” which implies conjunction.
- These previous two compound propositions are related by the words “if” and “then” which implies implication.
Thus we arrive at our full formalization of the statement:
\[ p_1 \vee p_2 \rightarrow p_3 \wedge p_4. \]
(Note that \((\rightarrow)\) has lower precedence than \((\vee)\) or \((\wedge)\) so the statement written as-is is not ambiguous.)
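As a quick aside: using the hypothetical Prop structs sketched earlier, we could write this formalization down as a Racket value. Note how the nesting of constructors makes the grouping, and hence the precedence, fully explicit:

;; p1 ∨ p2 → p3 ∧ p4
(define party-prop
  (Imp (Disj (Atom 'p1) (Atom 'p2))
       (Conj (Atom 'p3) (Atom 'p4))))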
When translating statements in this manner, it is useful to keep in mind some key words that usually imply the given connective:
- \((\neg)\): “not”
- \((\wedge)\): “and”
- \((\vee)\): “or”
- \((\rightarrow)\): “implies”, “if …then”
Reasoning with Propositions
Recall that our goal is to formally model reasoning. While equivalences allow us to show that two propositions are equivalent, they do not tell us how to reason with propositions. That is, how do we prove propositions once they have been formally stated?
First, we must understand the reasoning process itself so that we can understand its parts. From there, we can state rules in terms of manipulating these parts. When we are trying to prove a proposition, whether it is in the context of a debate, an argument with a friend, or a mathematical proof, our proving process can be broken up into two parts:
- A set of assumptions, propositions that we are assuming are true.
- The proposition that we are currently trying to prove, our proof goal.
We call this pair of objects our proof state. Our ultimate goal when proving a proposition is to transform the proof goal into an “obviously” correct statement through valid applications of proof rules.
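If we wanted to represent proof states in code, a minimal sketch, reusing our hypothetical Prop structs and our own names, might be:

;; A ProofState pairs the propositions we may assume (a list of Props)
;; with the Prop we are currently trying to prove, the proof goal.
(struct ProofState (assumptions goal) #:transparent)

;; For example: assuming p1 and p1 → p2, prove p2.
(define example-state
  (ProofState (list (Atom 'p1) (Imp (Atom 'p1) (Atom 'p2)))
              (Atom 'p2)))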
For example, if we are in the midst of a political debate, we might be tasked with proving the statement:
“80s-era Reaganomics is the solution to our current economic crisis.”
Our proof goal is precisely this statement and our set of assumptions includes our knowledge of Reaganomics and the current economy. We might take a step of reasoning—unfolding the definition of Reaganomics, for example—to refine our goal:
“Economic policies associated with tax reduction and the promotion of the free-market are the solution to our current economic crisis.”
We can then apply additional assumptions and logical rules to transform this statement further. Once our logical argument has transformed the statement into a self-evident form, for example:
“Everyone needs money to live.”
Then, if everyone agrees this is a true statement, we’re done! Because the current proof goal follows logically from assumptions and appropriate uses of logical rules, we know that if this proposition is true, then our original proposition is true. In a future reading, we will study these rules in detail in a system called natural deduction.
Propositional Logic Versus Boolean Algebra
Throughout this discussion, you might have noted similarities between propositions and a concept you have encountered previously in programming: booleans. Values of boolean type take on one of two forms, `true` and `false`, and there are various operators between booleans, `&&` and `||`, logical-AND and logical-OR, respectively. It seems like we can draw the following correspondence between propositions and booleans (using the boolean operators found in a C-like language):
- \(\top\) is `true`.
- \(\bot\) is `false`.
- \((\neg)\) is `!`.
- \((\wedge)\) is `&&`.
- \((\vee)\) is `||`.
This correspondence is quite accurate! In particular, the equivalences we discussed in the previous section for propositions also work for boolean expressions. The only caveat is that there is no analog to implication for the usual boolean expressions, but the equivalence:
\[ p_1 \rightarrow p_2 \equiv \neg p_1 \vee p_2 \]
allows us to translate an implication into an appropriate boolean expression.
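For instance, we might define our own implication operation over booleans in Racket directly from this equivalence (implies? is our own name for this sketch):

;; implies? : boolean? boolean? -> boolean?
;; Encodes p1 → p2 as (¬p1) ∨ p2 over booleans.
(define (implies? p1 p2)
  (or (not p1) p2))

> (implies? #t #f)
#f
> (implies? #f #t)
#t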
So why do we talk about propositions when booleans exist? It turns out they serve subtle, yet distinct purposes!
- A proposition is a statement that may be proven.
- A boolean expression is an expression that evaluates to `true` or `false`.
Note that a proposition does not say anything about whether the claim is true or false. It is also not computational in nature, unlike a boolean expression, which carries with it an evaluation semantics for how to evaluate it in the context of a larger computer program.
Furthermore, the scope of a boolean expression is narrow: we care about boolean values insofar as they assist in controlling the flow of a program, e.g., with conditionals and loops. We don’t need to complicate booleans any further! In contrast, propositions concern arbitrary statements, and our language of propositions may need to be richer than the corresponding language of booleans to capture the statements we have in mind.
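To make the distinction concrete, here is a sketch of an evaluator that imposes a boolean reading on our hypothetical Prop structs: given a function assigning booleans to atoms, it computes a boolean, translating implication via the equivalence above. The point is that a proposition has no evaluation semantics of its own; we must supply one:

;; eval-prop : Prop (name -> boolean?) -> boolean?
;; Evaluates proposition p under an assignment of booleans to atoms.
(define (eval-prop p assign)
  (match p
    [(Atom name)  (assign name)]
    [(Top)        #t]
    [(Bot)        #f]
    [(Neg q)      (not (eval-prop q assign))]
    [(Conj q1 q2) (and (eval-prop q1 assign) (eval-prop q2 assign))]
    [(Disj q1 q2) (or (eval-prop q1 assign) (eval-prop q2 assign))]
    [(Imp q1 q2)  (or (not (eval-prop q1 assign)) (eval-prop q2 assign))]))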
Exercise (Translation): Consider the following propositions
- \(p_1\) = “The sky is cloudy.”
- \(p_2\) = “I will go running today.”
- \(p_3\) = “I’m going to eat a cheeseburger.”
Write down English translations of these formal statements:
- \(p_2 \vee p_3\).
- \(p_1 \rightarrow p_3 \rightarrow p_2\). (Note: implication is a right-associative operator! This means that this statement is equivalent to \(p_1 → (p_3 → p_2)\)).
- \(p_1 \rightarrow p_2 \wedge \neg p_3\).
Exercise (Funny Nesting): Recall that implication is right-associative. That is, sequences of \((→)\) operators parenthesize “to the right”, i.e.,
\[ p_1 → p_2 → p_3 ≡ p_1 → (p_2 → p_3). \]
Now, consider the abstract proposition where we instead parenthesize to the left:
\[ (p_1 → p_2) → p_3. \]
Come up with concrete instantiations of \(p_1\), \(p_2\), and \(p_3\) and write down the English translation of this proposition.
First-order Logic
Propositional logic gives us a formal language for expressing propositions about mathematical objects. However, is it sufficiently expressive? That is, can we write down any proposition we might have in mind using propositional logic?
On the surface, this seems to be the case because we can instantiate our atomic propositions to be whatever we would like. For example, consider the following proposition:
Every month has a day in which it rains.
We can certainly take this entire proposition to be an atomic proposition, i.e.,
\[ P = \text{Every month has a day in which it rains}. \]
However, this is not an ideal encoding of the proposition in logic! Why is this the case? We can see that there is structure to the claim that is lost by taking the whole proposition to be atomic. In this case, the notion of “every” month is not distinguished from the rest of the proposition. This is a common idea in propositions, for example:
- For every natural number \(n\), \(n > 0\).
- For any boolean expression \(e\), \(e ⟶^* \mathsf{true}\).
- For any pair of people in the group, they are friends with each other.
We would like our formal language of logic to capture this idea of “every” as a distinguished form so that we can reason about it precisely. However, propositional logic has no such form!
To this end, we introduce a common extension to propositional logic, predicate logic (also known as first-order logic), which introduces a notion of quantification to propositions. Quantification captures precisely this concept of “every” we have identified, and more!
Introducing Quantification
When we express the notion of “every month” or “any pair of people,” we are really introducing unknown quantities, i.e., variables, into our propositions. For example, our sample proposition above introduces two such variables, months and days. At first glance, we might try expressing the proposition as an abstract proposition, i.e., a function that expects a month and day and produces a proposition:
\[ P(x, y) = x\;\text{has}\;y ∧ \text{it rains during}\;y. \]
We call such functions that produce propositions predicates.
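In Racket terms, a predicate is simply a function that returns a proposition. A sketch using our hypothetical structs (the atom payloads here are merely illustrative):

;; rains-on : month day -> Prop
;; A predicate: produces the proposition "x has y ∧ it rains during y".
(define (rains-on x y)
  (Conj (Atom (list x 'has y))
        (Atom (list 'rains-during y))))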
However, this encoding of our proposition is not quite accurate! If we think carefully about the situation, we will note that the interpretations of \(x\) and \(y\) are subtly different! We can make this clearer with a slight rewriting of our original proposition:
For every month, there exists a day in which it rains.
Note how we are interested in “every month.” However, we are not interested in “every day”; we only identify a single day in which the property holds. The property may hold for more than one day for any given month, but we only care that there is at least one such day.
Thus, we find ourselves needing to quantify our variables in two different ways:
- Any value (of a given type).
- At least one value (of a given type).
Observe that our predicate notation \(P(x, y)\) does not tell us which quantification we are using!
First-order logic extends propositional logic with two additional connectives that make explicit this quantification:
\[ p ::= \ldots \mid A(x) \mid \forall x.\,p \mid \exists x.\,p \]
- Universal quantification of a proposition, written \(\forall x.\,p\) (pronounced “for all \(x\), \(p\)”, LaTeX: `\forall`), introduces a universally quantified variable \(x\) that may be mentioned inside of \(p\).
- Existential quantification of a proposition, written \(\exists x.\,p\) (pronounced “there exists \(x\), \(p\)”, LaTeX: `\exists`), introduces an existentially quantified variable \(x\) that may be mentioned inside of \(p\).
These quantified variables may then appear inside of our atomic propositions. We write \(A(x)\) to remind ourselves that our atomic propositions may be parameterized by these quantified variables.
With quantifiers, we can now fully specify our example in first-order logic:
\[ P = ∀x \ldotp ∃y \ldotp x\;\text{has}\;y ∧ \text{it rains during}\;y. \]
The first quantifier \(∀x\) introduces a new variable \(x\) that is interpreted universally. That is, the following proposition holds for every possible value of \(x\). The second quantifier \(∃y\) introduces a new variable \(y\) that is interpreted existentially. That is, the following proposition holds for at least one possible value of \(y\). When taken together, our English pronunciation of this formal proposition now lines up with our intuition:
For all months \(x\), there exists a day \(y\) where \(x\) has \(y\) and it rains during \(y\).
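If we were to extend our hypothetical Racket Prop datatype in the same way, we might add one struct per quantifier, recording the name of the bound variable alongside the body (here we use plain symbols as stand-ins for bound variables):

;; New Prop forms for first-order logic:
;;   (Forall x p)   ∀x. p
;;   (Exists x p)   ∃x. p
(struct Forall (x p) #:transparent)
(struct Exists (x p) #:transparent)

;; ∀x. ∃y. x has y ∧ it rains during y, reusing rains-on from above
(define rainy-months
  (Forall 'x (Exists 'y (rains-on 'x 'y))))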
Variables and Scope in Mathematics
Like function headers in a programming language, quantifiers introduce variables into our propositions. For example, consider the following Racket function definition:
;; length : list? -> natural?
;; Computes the number of elements in the list lst.
(define length
  (lambda (lst)
    (match lst
      ['() 0]
      [(cons head tail) (+ 1 (length tail))])))
Here, the `lambda` form introduces a new program variable `lst` that may be used anywhere inside of the function bound to `length`. However, `lst` has no meaning outside of `length`. The location where a variable has meaning is called its scope:
Definition (Scope): the scope of a variable is the region in which that variable has meaning.
In the case of functions, the scope of a parameter is the body of its enclosing function. So if we try to evaluate `lst` outside of the definition of `length`, we receive an error:
> (length (list 1 2 3))
3
> lst
lst: undefined;
cannot reference an identifier before its definition
The same principles hold for logical variables. The quantifiers \(∀x \ldotp p\) and \(∃x \ldotp p\) introduce an appropriately quantified variable \(x\) that has scope inside of \(p\); \(x\) does not have meaning outside of \(p\). For example, the proposition:
\[ P = x > 5 ∧ ∀x \ldotp x ≠ 0. \]
is malformed because the \(x\) outside the quantifier is not well-scoped.
Implicit Types and Quantification
Variables are always quantified according to a particular set of values, their type. However, you may have noticed that we never mention the type of a variable in its quantification. How do we know what the type of \(x\) is in \(∀x \ldotp p\)? Traditionally, we infer the types of variables based on their usage.
For example, in the proposition:
\[ ∀x \ldotp ∃y \ldotp x > y. \]
we can see that \(x\) and \(y\) are used in a numeric comparison \((>)\), so we assume that the quantified variables range over numbers. Could the variables be quantified more specifically? Certainly! \(x\) and \(y\) could be real numbers, integers, or even the natural numbers (i.e., the positive integers or zero). We would need to look at more surrounding context, e.g., how this proposition is used in a larger mathematical text, to know whether the variables can be typed in more specific ways.
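When an author does want to make the types explicit, one common convention is to annotate each quantifier with the set its variable ranges over; for instance, taking both variables to be integers:

\[ ∀x ∈ \mathbb{Z} \ldotp ∃y ∈ \mathbb{Z} \ldotp x > y. \]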
In addition to implicit types, sometimes quantification is also implicit. For example, the standard statement of the Pythagorean theorem is:
\[ a^2 + b^2 = c^2 \]
where the quantification of \(a\), \(b\), and \(c\) is implicit. What is the implied quantification of these variables? It turns out that when a quantifier is omitted, we frequently intend for the variables to be universally quantified, i.e., the Pythagorean theorem holds for any \(a\), \(b\), and \(c\) of appropriate types.
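For example, a fully explicit rendering of the theorem quantifies all three variables and also surfaces the implicit hypothesis that they are the side lengths of a right triangle:

\[ ∀a \ldotp ∀b \ldotp ∀c \ldotp \text{\(a\), \(b\), and \(c\) are the sides of a right triangle with hypotenuse \(c\)} \rightarrow a^2 + b^2 = c^2. \]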
When writing mathematics, you should always explicitly quantify your variables so your intent is clear. However, you should be aware that implicit typing and quantification are pervasive in mathematical texts, so you will need to use inference and your best judgment to make sense of a variable.
Exercise (Quantified Translation):
Consider the following parameterized propositions:
- \(A_1(x)\) = “\(x\) is a kitty.”
- \(A_2(x, y)\) = “\(x\) likes \(y\).”
- \(A_3(x)\) = “\(x\) is a dog.”
Translate these formal propositions into informal English descriptions:
- \(\exists x.\,\exists y.\,A_1(x) \wedge A_3(y)\).
- \(\forall x.\,\exists y.\,A_1(y) \rightarrow A_2(x, y)\).
- \(\forall x.\,\forall y.\,(A_1(x) \wedge A_3(y)) \rightarrow \neg A_2(x, y)\).