Bulletin of the Section of Logic
Volume 41:3/4 (2012), pp. 173–184

François Lepage

PARTIAL PROBABILITY FUNCTIONS AND INTUITIONISTIC LOGIC∗

Abstract

We first present a sound and complete system of intuitionistic logic augmented with Nelson's strong negation, interpreted in Kripke model structures. We then show that intuitionistic logic can naturally be interpreted in a modal trivalent logic (propositions are true, false or undefined). Secondly, we introduce a partial probability interpretation, inspired by Popper's conditional probability functions and characterized by the fact that conditions are not propositions but rather sets of propositions. Finally, we define the notion of probabilistic validity and show, using Kripke models, that the intuitionistic system is sound and complete.

1. Motivation and background

One starting point of this paper is partial logic. In [5] and [6], Lapierre and I showed that partial logic can be interpreted as a three-valued logic, the third value being the undefined value. Following Thijsse [13], we used saturated deductively closed consistent sets (SDCCS) for the completeness proof. A set is a SDCCS if it is deductively closed and if it contains (A ∨ B) only if it contains either A or B. If we restrict ourselves to partial propositional logic, the following system is sound and complete for Kleene's strong 3-valued logic.

∗ This work was supported by the Social Sciences and Humanities Research Council of Canada. Thanks to Grzegorz Malinowski, who found a flaw in the first version of this paper.

A derivation of A from Γ (written Γ ⊢ A) is a sequence of propositions A0, ..., An such that each Ai satisfies one of the following clauses:

R. If Ai ∈ Γ, then Γ ⊢ Ai
D1. If Γ ⊢ B ∧ ∼B, then Γ ⊢ Ai
D2. If Γ ⊢ ∼∼Ai, then Γ ⊢ Ai
D3. If Γ ⊢ Aj, then Γ ⊢ Ai (where Ai is ∼∼Aj)
D4. If Γ ⊢ Ai ∧ B, then Γ ⊢ Ai
D5. If Γ ⊢ B ∧ Ai, then Γ ⊢ Ai
D6. If Γ ⊢ B and Γ ⊢ C, then Γ ⊢ Ai (where Ai is B ∧ C)
D7. If Γ ⊢ Ai, then Γ ⊢ Ai ∨ B
D8. If Γ ⊢ Ai, then Γ ⊢ B ∨ Ai
D9. If Γ ⊢ ∼(C ∧ B), then Γ ⊢ Ai (where Ai is ∼C ∨ ∼B)
D10. If Γ ⊢ ∼(C ∨ B), then Γ ⊢ Ai (where Ai is ∼C ∧ ∼B)
D11. If Γ ∪ {B} ⊢ Ai and Γ ∪ {C} ⊢ Ai, then Γ ∪ {B ∨ C} ⊢ Ai

This system has no theorems and the deduction theorem does not hold. As Frankowski [3] has noted, there are strong links between partial logic and intuitionistic logic. Indeed, it can be shown that the deduction rules above are valid in Nelson's intuitionistic logic with strong negation [11]. Moreover, Aczel [1] showed that Nelson's system is sound and complete for Kripke's semantics [4] in which possible worlds are SDCCS.

The second starting point of this paper is the notion of probabilistic interpretation. Popper [12] first presented an axiomatization of a probability calculus that takes a two-place probability function as a primitive notion. The two important features of Popper's approach are that conditionalization on sentences of probability 0 is always defined, and that the classical absolute probability functions can be regarded as a special case, namely conditionalization on a tautology. The great breakthrough in the probabilistic interpretation of the logical calculus, however, is Hartry Field's [2] axiomatization of the notion of probabilistic validity, which makes no use of the notion of truth-value or, more generally, of any classical semantic notion based on truth functions. Field's axiomatization of conditional probability for both the propositional calculus and the first-order predicate calculus can be used to prove the soundness and completeness of their usual axiomatizations. There are many ways to axiomatize conditional probability functions.
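Kleene's strong 3-valued connectives mentioned above admit a compact numeric encoding. The following sketch (the encoding and names are illustrative, not from the paper) checks that the De Morgan principles behind rules D9 and D10 hold for every assignment, including the undefined value:

```python
# Kleene's strong 3-valued logic: encode T, U (undefined), F as numbers
# so that strong conjunction/disjunction become min/max. Illustrative only.
T, U, F = 1.0, 0.5, 0.0

def neg(a):
    # strong negation: swaps T and F, leaves U fixed
    return 1.0 - a

def conj(a, b):
    # strong conjunction is the minimum of the two values
    return min(a, b)

def disj(a, b):
    # strong disjunction is the maximum of the two values
    return max(a, b)

vals = [T, U, F]
# De Morgan laws (rules D9 and D10) hold under this semantics:
assert all(neg(conj(a, b)) == disj(neg(a), neg(b)) for a in vals for b in vals)
assert all(neg(disj(a, b)) == conj(neg(a), neg(b)) for a in vals for b in vals)
```

Note that with this encoding no formula built from ∼, ∧, ∨ alone evaluates to T on the all-undefined assignment, which is one way to see why the system has no theorems.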
The most general is probably that of Morgan [8], [9], which uses two-place probability functions whose second argument is a set of sentences instead of a single sentence. Morgan showed that every consistent extension of classical logic is complete for a given class of probabilistic interpretations. Moreover, Morgan and Leblanc [10] showed that intuitionistic logic is sound and complete for a given class of probabilistic interpretations. The problem is that in their interpretations the probability is always defined, and any non-consequence of Γ has probability 0 in the canonical model used for the completeness proof, which is quite unnatural. Besides, Morgan and I [7] showed that partial logic (which is not an extension of classical logic) is complete for a given class of partial probability functions. In the rest of this paper, I will show how these partial probability functions can be used to provide sound and complete interpretations for Nelson's logic. The two main features of my proposal are:

1. Pr(A ∨ ∼A, Γ) = 1 iff Pr(A, Γ) is defined (and A is not of the form B → C),¹ and Pr(A ∨ ∼A, Γ) is undefined iff Pr(A, Γ) is undefined.
2. The probabilistic interpretation of the intuitionistic conditional is a partial conditional probability, i.e. Pr(A → B, Γ) = Pr(B, Γ ∪ {A}) when defined. In [10], Pr(A → B, Γ) = Pr(B, Γ ∪ {A}) as well, but it is always defined.

2. Syntax

Let PI (for Partial Intuitionistic) be the following system, based on the four connectives ∼, ∧, ∨ and →, where ∼ is Nelson's strong negation. The axioms are:

I1. A → (B → A)
I2. (A → (B → C)) → ((A → B) → (A → C))
I3. A ∧ B → A
I4. A ∧ B → B
I5. A → A ∨ B
I6. B → A ∨ B
I7. (A → C) → ((B → C) → (A ∨ B → C))
I8. F → A

¹ For the justification of this restriction, see the closing remarks of the paper.
PI1. ∼∼A → A
PI2. A → ∼∼A
PI3. ∼(A ∧ B) → ∼A ∨ ∼B
PI4. ∼(A ∨ B) → ∼A ∧ ∼B
PI5. A ∧ ∼A → F
PI6. ∼(A → B) → (A ∧ ∼B)
PI7. (A ∧ ∼B) → ∼(A → B)
PI8. ∼¬A → A
PI9. A → ∼¬A
PI20. ∼A → (A → B)
PI21. A → (B → (A ∧ B))
PI22. ∼A ∨ ∼B → ∼(A ∧ B)
PI23. ∼A ∧ ∼B → ∼(A ∨ B)

where ¬A stands for A → F, F is ∼(A → (B → A)), and MP is the only rule. The deduction theorem holds in PI since its proof needs only I1 and I2.

A saturated deductively closed consistent set (SDCCS) Γ is a set which is:

1. Saturated, i.e. (A ∨ B) ∈ Γ iff A ∈ Γ or B ∈ Γ;
2. Deductively closed, i.e. A ∈ Γ iff Γ ⊢ A;
3. Consistent, i.e. there is an A such that Γ ⊬ A.

Proposition 1. For any consistent set ∆, there is a SDCCS Γ such that ∆ ⊆ Γ.

By extension, if Γ is inconsistent, S(Γ) will be the set of all sentences.

3. Semantics

We now turn our attention to probability distributions. The main idea, of course, is that a partial probability distribution should be just like a total, ordinary probability distribution but undefined for some arguments. While we want to allow gaps in the distributions, these gaps are not totally arbitrary: any value that can be computed from values that are given cannot be left undefined. More precisely:

Definition 1.
A partial probability function Pr is a partial function Pr : L × 2^L → [0, 1] such that the following constraints are always satisfied:

P.1 If A ∈ Γ, then Pr(A, Γ) is defined;
P.2 If Pr(A, Γ) is defined, then Pr(∼A, Γ) is defined;
P.3 If Pr(∼A, Γ) is defined, then Pr(A, Γ) is defined;
P.4 If Pr(A ∧ B, Γ) is defined, then Pr(B ∧ A, Γ) is defined;
P.5 If Pr(A, Γ) and Pr(B, Γ) are undefined, then Pr(A ∧ B, Γ) is undefined;
P.6 If Pr(A, Γ) = 0, then Pr(A ∧ B, Γ) = 0;
P.7 If Pr(A, Γ) = 1, then Pr(A ∨ B, Γ) = 1;
P.8 If Pr(A ∧ B, Γ) is defined and Pr(A, Γ) is undefined, then Pr(B, Γ) = 0;
P.9 If Pr(A ∨ B, Γ) is defined and Pr(A, Γ) is undefined, then Pr(B, Γ) = 1.

And when all the appropriate values of Pr are defined, the following constraints are satisfied:

NP.1 0 ≤ Pr(A, Γ) ≤ 1;
NP.2 If A ∈ Γ, then Pr(A, Γ) = 1;
NP.3 Pr(A ∨ B, Γ) = Pr(A, Γ) + Pr(B, Γ) − Pr(B ∧ A, Γ);
NP.4 Pr(A ∧ B, Γ) = Pr(A, Γ) × Pr(B, Γ ∪ {A});
NP.5 Pr(∼A, Γ) = 1 − Pr(A, Γ), provided Γ is Pr-normal (i.e., there is at least one A such that Pr(A, Γ) = 0);
NP.6 Pr(A ∧ B, Γ) = Pr(B ∧ A, Γ);
NP.7 Pr(C, Γ ∪ {A ∧ B}) = Pr(C, Γ ∪ {A, B});
NP.8 Pr(A → B, Γ) = Pr(B, Γ ∪ {A}).

The following elementary results will be useful.

Proposition 2.

E.1 If Pr(A, Γ) = 1, then Pr(A ∧ B, Γ) = Pr(B, Γ) if Pr(B, Γ) is defined, and is undefined otherwise.
E.2 If Pr(A, Γ) = 1, then Pr(B, Γ) = Pr(B, Γ ∪ {A}) if Pr(B, Γ) is defined, and is undefined otherwise.
E.3 If Pr(C, Γ ∪ {A}) = 1 and Pr(C, Γ ∪ {B}) = 1 and Pr(A ∨ B, Γ) = 1, then Pr(C, Γ) = 1.
E.4 For all A, B and Γ, Pr(B, Γ ∪ {A}) = 1 iff for all ∆, if Pr(A, Γ ∪ ∆) = 1, then Pr(B, Γ ∪ ∆) = 1.
E.5 If Γ is Pr-normal but Γ ∪ {A} is Pr-abnormal, then Pr(A, Γ) = 0.
E.6 Pr(A ∨ B, Γ) ≥ Pr(A, Γ) when Pr(A ∨ B, Γ) is defined.

We now define the semantic consequence relation, based on partial probability distributions.
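To see the numeric constraints NP.2–NP.4 and NP.8 at work, here is a toy total probability function over two atoms, built from a uniform measure on the four classical valuations (a hypothetical illustration only: the paper's functions are partial and need not arise from a measure in this way):

```python
from itertools import product
from fractions import Fraction

# Valuations of the atoms (p, q); formulas are represented as Python
# functions from (p, q) to bool. This is an illustrative classical instance.
worlds = list(product([True, False], repeat=2))

def Pr(A, Gamma):
    # Pr(A, Γ): conditional probability of A given the set of formulas Γ
    live = [w for w in worlds if all(G(*w) for G in Gamma)]
    return Fraction(sum(A(*w) for w in live), len(live))

fp = lambda p, q: p
fq = lambda p, q: q
fp_or_fq = lambda p, q: p or q
fp_and_fq = lambda p, q: p and q

G = set()  # empty evidence set
# NP.3: Pr(A ∨ B, Γ) = Pr(A, Γ) + Pr(B, Γ) − Pr(B ∧ A, Γ)
assert Pr(fp_or_fq, G) == Pr(fp, G) + Pr(fq, G) - Pr(fp_and_fq, G)
# NP.4: Pr(A ∧ B, Γ) = Pr(A, Γ) × Pr(B, Γ ∪ {A})
assert Pr(fp_and_fq, G) == Pr(fp, G) * Pr(fq, G | {fp})
# NP.2: members of Γ get probability 1
assert Pr(fp, G | {fp}) == 1
```

Under NP.8, the probability of the conditional p → q here would be Pr(q, Γ ∪ {p}), a value no classical truth-functional connective reproduces in general.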
We use the standard intuition that Γ semantically implies A if and only if, no matter what your belief system is, there is nothing we could add to Γ that would make you doubt A.

Definition 2. A is a semantic consequence of Γ (written Γ ⊨ A) if and only if for all probability distributions Pr and all ∆, Pr(A, Γ ∪ ∆) = 1.

4. Soundness

We will need the following lemma (the semantic counterpart of the deduction theorem, DT).

Lemma 1. Γ ⊨ A → B iff Γ ∪ {A} ⊨ B.

Proof: Γ ⊨ A → B iff for any Pr and any ∆, Pr(A → B, Γ ∪ ∆) = 1, iff for any Pr and any ∆, Pr(B, Γ ∪ ∆ ∪ {A}) = 1, iff for any Pr and any ∆, Pr(B, Γ ∪ {A} ∪ ∆) = 1, iff Γ ∪ {A} ⊨ B.

Proposition 3. PI is sound for the probabilistic interpretation, i.e. if Γ ⊢ A, then Γ ⊨ A.

Proof: Suppose that Γ ⊢ A. Then there is a proof A0, ..., An of A from Γ. We prove by induction on i that Γ ⊨ Ai. Basis: we have to check that (i) if A0 ∈ Γ or (ii) A0 is an axiom, then Pr(A0, Γ ∪ ∆) = 1 for all Pr and ∆; (i) follows from NP.2, and for (ii) we have to check that every axiom is valid. Induction step: (iii) MP transmits the value 1. We give three examples; the other cases are left to the reader.

I1. A → (B → A). By definition of ⊢, Γ ⊢ A → (B → A). Using DT twice, we have Γ ∪ {A} ∪ {B} ⊢ A. But Γ ∪ {A} ∪ {B} ⊨ A, i.e. for any Pr and ∆, Pr(A, Γ ∪ {A} ∪ {B} ∪ ∆) = 1, which is always the case by NP.2. By applying Lemma 1 twice, Γ ⊨ A → (B → A).

PI6. ∼(A → B) → (A ∧ ∼B). We have to show that for all Pr, Γ and ∆, if Pr(∼(A → B), Γ ∪ ∆) = 1, then Pr(A ∧ ∼B, Γ ∪ ∆) = 1. There are two cases:

(i) Γ ∪ ∆ is Pr-abnormal. Then Pr(A ∧ ∼B, Γ ∪ ∆) = 1.

(ii) Γ ∪ ∆ is Pr-normal. Suppose that Pr(∼(A → B), Γ ∪ ∆) = 1. Then, by NP.5, Pr(A → B, Γ ∪ ∆) = 0 and Pr(B, Γ ∪ ∆ ∪ {A}) = 0. Now, Pr(A ∧ ∼B, Γ ∪ ∆) = Pr(A, Γ ∪ ∆) × Pr(∼B, Γ ∪ ∆ ∪ {A}) = Pr(A, Γ ∪ ∆). So we have to prove that Pr(A, Γ ∪ ∆) = 1. Suppose that Pr(∼A, Γ ∪ ∆) ≠ 0.
By E.5, Γ ∪ ∆ ∪ {∼A} is Pr-normal, and Pr(A → B, Γ ∪ ∆ ∪ {∼A}) = 0, and thus Pr(B, Γ ∪ ∆ ∪ {∼A, A}) = 0, which is impossible because Γ ∪ ∆ ∪ {∼A, A} is Pr-abnormal. So Pr(∼A, Γ ∪ ∆) = 0 and, by NP.5, Pr(A, Γ ∪ ∆) = 1.

PI9. A → ∼¬A. By E.4, we have to show that for all Pr, Γ and ∆, if Pr(A, Γ ∪ ∆) = 1, then Pr(∼¬A, Γ ∪ ∆) = 1. There are two cases:

(i) Γ ∪ ∆ is Pr-abnormal. Then Pr(∼¬A, Γ ∪ ∆) = 1.

(ii) Γ ∪ ∆ is Pr-normal. Suppose that Pr(A, Γ ∪ ∆) = 1. We know that Pr(F, Γ ∪ ∆) = 0. By E.1, Pr(A → F, Γ ∪ ∆) = Pr(F, Γ ∪ ∆ ∪ {A}) = 0 unless Γ ∪ ∆ ∪ {A} is Pr-abnormal; but in that case, by E.5, Pr(A, Γ ∪ ∆) = 0, which contradicts the hypothesis. So Pr(¬A, Γ ∪ ∆) = 0 and Pr(∼¬A, Γ ∪ ∆) = 1.

5. Completeness

In order to show that if Γ ⊨ A then Γ ⊢ A, we have to show that if Γ ⊬ A, then there is a model of Γ in which A is not true, i.e. is false or undefined. Let W be the set of all SDCCS of PI. The following propositions will be useful.

Proposition 4. W is partially ordered by ⊆, and for any ∆ ∈ W, if A → B ∈ ∆, then for any ∆′ such that ∆ ⊆ ∆′, if A ∈ ∆′ then B ∈ ∆′. (We omit the proof, which is by now a classic.)

Proposition 5. Let Γ be a consistent set such that Γ ⊬PI A → B. There is a saturated set Γ′ such that Γ ⊆ Γ′, A ∈ Γ′ and B ∉ Γ′.

Proof: Clearly, Γ ∪ {A} ⊬PI B (the deduction theorem holds in PI). So, starting from Γ ∪ {A}, we define Γ′ in the following way. Let E1, ..., En, ... be a canonical enumeration of the formulas of PI in which each formula appears denumerably many times. Let Γ0, ..., Γi, ... be the following sequence of sets:

Γ0 = Γ ∪ {A}

Γi+1 = Γi, if Γi ⊬PI Ei;
Γi+1 = Γi ∪ {Ei}, if Γi ⊢PI Ei and Ei is not of the form C ∨ D;
Γi+1 = Γi ∪ {Ei, C}, if Γi ⊢PI Ei, Ei is C ∨ D, and Γi ∪ {Ei, C} ⊬PI B;
Γi+1 = Γi ∪ {Ei, D}, if Γi ⊢PI Ei, Ei is C ∨ D, and Γi ∪ {Ei, C} ⊢PI B.

Γ′ = ∪i Γi is a saturated set such that Γ ⊆ Γ′ and B ∉ Γ′.

Let Γ be any set. We define U(Γ) = {∆ : ∆ is a SDCCS and Γ ⊆ ∆}.
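One step of the saturation sequence in Proposition 5 can be sketched as follows. The oracle below is a naive classical truth-table consequence test standing in for ⊢PI (the real construction uses PI-derivability over the full enumeration; this is only an illustration of the four-way case split):

```python
from itertools import product

# Formulas over atoms "a", "b", "c": an atom is a string, compounds are
# tuples ("not", X), ("and", X, Y), ("or", X, Y). Hypothetical encoding.
ATOMS = ["a", "b", "c"]

def eval_f(f, v):
    if isinstance(f, str):
        return v[f]
    if f[0] == "not":
        return not eval_f(f[1], v)
    if f[0] == "and":
        return eval_f(f[1], v) and eval_f(f[2], v)
    return eval_f(f[1], v) or eval_f(f[2], v)

def proves(premises, goal):
    # stand-in oracle: classical semantic consequence by truth tables
    for vals in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, vals))
        if all(eval_f(p, v) for p in premises) and not eval_f(goal, v):
            return False
    return True

def step(Gi, Ei, B):
    # the four cases defining Γ_{i+1} from Γ_i, the formula E_i, and goal B
    if not proves(Gi, Ei):
        return Gi
    if not (isinstance(Ei, tuple) and Ei[0] == "or"):
        return Gi | {Ei}
    C, D = Ei[1], Ei[2]
    if not proves(Gi | {Ei, C}, B):
        return Gi | {Ei, C}
    return Gi | {Ei, D}

# From Γ0 = {a ∨ b, ¬a} with goal B = c, the disjunction a ∨ b must be
# saturated on the side that keeps c underivable, i.e. b:
G1 = step(frozenset({("or", "a", "b"), ("not", "a")}), ("or", "a", "b"), "c")
assert "b" in G1 and not proves(G1, "c")
```

The key invariant, preserved by each case, is that Γi ⊬ B; taking the union over all stages then yields a saturated extension omitting B.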
We will call <W, ⊆> a canonical model structure. We define a function Pr<W,⊆> such that for any A:

Pr<W,⊆>(A, Γ) = 1 iff A ∈ ∆ for all ∆ ∈ U(Γ);
Pr<W,⊆>(A, Γ) = 0 iff ∼A ∈ ∆ for all ∆ ∈ U(Γ);
Pr<W,⊆>(A, Γ) is undefined otherwise.

Proposition 6. Pr<W,⊆> satisfies P.1–P.9 and NP.1–NP.8.

Proof: We drop the index <W, ⊆>.

P.1 If A ∈ Γ, then Pr(A, Γ) is defined. Trivial.

P.2 If Pr(A, Γ) is defined, then Pr(∼A, Γ) is defined. Trivial.

P.3 If Pr(∼A, Γ) is defined, then Pr(A, Γ) is defined. Trivial.

P.4 If Pr(A ∧ B, Γ) is defined, then Pr(B ∧ A, Γ) is defined. Two cases: (a) Pr(A ∧ B, Γ) = 1. Then A ∧ B ∈ ∆ for all ∆ ∈ U(Γ); by I3 and I4, A ∈ ∆ and B ∈ ∆; and finally, by PI21, B ∧ A ∈ ∆. (b) Pr(A ∧ B, Γ) = 0. We have Pr(∼(A ∧ B), Γ) = 1 and ∼(A ∧ B) ∈ ∆ for all ∆ ∈ U(Γ). By PI3, ∼A ∨ ∼B ∈ ∆, and thus ∼A ∈ ∆ or ∼B ∈ ∆; by I5 and I6, in both cases ∼B ∨ ∼A ∈ ∆, and finally, by PI22, ∼(B ∧ A) ∈ ∆, implying that Pr(B ∧ A, Γ) = 0.

P.6 If Pr(A, Γ) = 0, then Pr(A ∧ B, Γ) = 0. If Pr(A, Γ) = 0, then ∼A ∈ ∆ for all ∆ ∈ U(Γ), thus ∼A ∨ ∼B ∈ ∆ and, by PI22, ∼(A ∧ B) ∈ ∆.

P.7 If Pr(A, Γ) = 1, then Pr(A ∨ B, Γ) = 1. If Pr(A, Γ) = 1, then Pr(∼A, Γ) = 0, Pr(∼A ∧ ∼B, Γ) = 0, and Pr(A ∨ B, Γ) = 1.

NP.1 0 ≤ Pr(A, Γ) ≤ 1. Trivial.

NP.2 If A ∈ Γ, then Pr(A, Γ) = 1 (and is defined for all Pr and Γ). Trivial.

NP.3 Pr(A ∨ B, Γ) = Pr(A, Γ) + Pr(B, Γ) − Pr(B ∧ A, Γ). Suppose that Pr(A ∨ B, Γ) = 1. Then A ∨ B ∈ ∆ for all ∆ ∈ U(Γ), and A ∈ ∆ or B ∈ ∆. If A ∈ ∆, there are two possibilities: (i) B ∈ ∆ or (ii) ∼B ∈ ∆. In case (i), by PI21, A ∧ B ∈ ∆. In case (ii), ∼A ∨ ∼B ∈ ∆ and ∼(A ∧ B) ∈ ∆, and thus Pr(A ∧ B, Γ) = 0. In both cases, NP.3 holds.

NP.4 Pr(A ∧ B, Γ) = Pr(A, Γ) × Pr(B, Γ ∪ {A}). If Pr(A ∧ B, Γ) = 1, then A ∧ B ∈ ∆ for all ∆ ∈ U(Γ), so A ∈ ∆ and B ∈ ∆. Thus Pr(A, Γ) = Pr(B, Γ ∪ {A}) = 1. If Pr(A ∧ B, Γ) = 0, then ∼(A ∧ B) ∈ ∆ for all ∆ ∈ U(Γ), and ∼A ∈ ∆ or ∼B ∈ ∆. In the first case, Pr(A, Γ) = 0.
In the second case, because Pr(A, Γ) is defined, Pr(A, Γ) = 1, and thus Pr(B, Γ ∪ {A}) = Pr(B, Γ) = 0, since ∼B ∈ ∆.

NP.5 Pr(∼A, Γ) = 1 − Pr(A, Γ), provided Γ is Pr-normal. Suppose that Pr(∼A, Γ) = 1. This means that ∼A ∈ ∆ for all ∆ ∈ U(Γ), and thus Pr(A, Γ) = 0, and 1 = 1 − 0. Suppose that Pr(∼A, Γ) = 0. This means that ∼∼A ∈ ∆ for all ∆ ∈ U(Γ), and thus A ∈ ∆ and Pr(A, Γ) = 1. But 0 = 1 − 1.

NP.6 Pr(A ∧ B, Γ) = Pr(B ∧ A, Γ). Trivial.

NP.7 Pr(C, Γ ∪ {A ∧ B}) = Pr(C, Γ ∪ {A, B}). Trivial, because U(Γ ∪ {A ∧ B}) = U(Γ ∪ {A, B}).

NP.8 Pr(A → B, Γ) = Pr(B, Γ ∪ {A}). We show that Pr(A → B, Γ) = 1 iff Pr(B, Γ ∪ {A}) = 1, and Pr(A → B, Γ) = 0 iff Pr(B, Γ ∪ {A}) = 0.

Suppose that Pr(A → B, Γ) = 1. This means that A → B ∈ ∆ for all ∆ ∈ U(Γ). But U(Γ ∪ {A}) ⊆ U(Γ). Thus, since B ∈ ∆′ for any ∆′ such that A ∈ ∆′ and ∆ ⊆ ∆′, we have B ∈ ∆ for any ∆ ∈ U(Γ ∪ {A}). So Pr(B, Γ ∪ {A}) = 1. Now suppose that Pr(B, Γ ∪ {A}) = 1. This means that B ∈ ∆ for any ∆ ∈ U(Γ ∪ {A}). But, by Proposition 5, if A → B ∉ ∆, there is a ∆′ such that ∆ ⊆ ∆′, A ∈ ∆′ and B ∉ ∆′, which contradicts the hypothesis that Pr(B, Γ ∪ {A}) = 1.

Suppose that Pr(A → B, Γ) = 0. This means that ∼(A → B) ∈ ∆ for all ∆ ∈ U(Γ). Thus A ∈ ∆ and ∼B ∈ ∆, and so Pr(B, Γ ∪ {A}) = Pr(B, Γ) = 0. Now suppose that Pr(B, Γ ∪ {A}) = 0. For any ∆ ∈ U(Γ ∪ {A}), ∼B ∈ ∆. But U(Γ ∪ {A}) ⊆ U(Γ), and any ∆ ∈ U(Γ) such that A ∈ ∆ is such that ∆ ∈ U(Γ ∪ {A}). So A, ∼B ∈ ∆, and thus ∼(A → B) ∈ ∆ and Pr(A → B, Γ) = 0.

Proposition 7. If Γ ⊨ A, then Γ ⊢ A.

Proof: If Γ ⊬ A, there is a SDCCS ∆ such that Γ ⊆ ∆ and A ∉ ∆. So Pr(A, Γ) ≠ 1 and Γ ⊭ A.

6. Closing remarks

Intuitionistic logic was introduced by Brouwer as the logic of mathematical reasoning: A is true if there is a proof of A; otherwise it is not true.
Nelson introduced a notion of falsity: among the non-truths there are falsities and propositions that are not yet true or false.

The probabilistic interpretation is a generalization to knowledge in general, not restricted to mathematical knowledge. Pr(A, Γ) can be interpreted as the subjective chance that A is the case given Γ, and 1 − Pr(A, Γ) as the chance that A is not the case given Γ. But sometimes an agent may have no idea of the likelihood of A given the evidence Γ. In that case, Pr(A, Γ) is undefined, and so is Pr(∼A, Γ).

The probabilistic approach is not very intuitive as an interpretation of mathematical knowledge. One reason is that the most natural reading of "having a set Γ of evidence for A" is having a proof of A from the set of premises Γ, and this is what the canonical model shows. However, if one thinks that natural logic is intuitionistic logic, then the probabilistic approach is natural. Most of our beliefs are not absolutely certain. Worse, some propositions are so uncertain that they escape the law of excluded middle. "It will rain on Mars or it will not rain on Mars tomorrow" is an example, at least for me, because I do not know whether the concept of "raining on Mars" makes sense. In other words, for all I know about Mars, I cannot imagine what would count as evidence for or against raining on Mars.

My result gives an answer to a question that has haunted philosophers for many years: is there a logical connective ∗ such that Pr(A ∗ B, Γ) = Pr(B, Γ ∪ {A})? We know that there is no classical truth-functional connective with this property. What we have put forward is that when the probability of A → B is defined, it is Pr(B, Γ ∪ {A}).

My system also has the following property. Let A be a sentence that does not contain any →. Then Pr(A ∨ ∼A, Γ) = Pr(A, Γ) + Pr(∼A, Γ) − Pr(A ∧ ∼A, Γ), which is 1 when defined. This is a very nice property.
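The arithmetic behind this observation can be checked directly. The sketch below (an illustrative computation, not part of the system) ranges Pr(A, Γ) over a few sample values, applies NP.5 for the negation and takes Pr(A ∧ ∼A, Γ) = 0, as NP.4 with NP.2 and NP.5 forces for a Pr-normal Γ ∪ {A}:

```python
from fractions import Fraction

# Check: for →-free A, whenever the values below are all defined,
# NP.3 and NP.5 force Pr(A ∨ ∼A, Γ) = 1. Γ is held fixed and implicit.
for pr_A in [Fraction(0), Fraction(1, 3), Fraction(1, 2), Fraction(1)]:
    pr_not_A = 1 - pr_A           # NP.5, assuming Γ is Pr-normal
    pr_A_and_not_A = Fraction(0)  # NP.4: Pr(A, Γ) × Pr(∼A, Γ ∪ {A}) = Pr(A, Γ) × 0
    # NP.3 for the disjunction A ∨ ∼A:
    pr_A_or_not_A = pr_A + pr_not_A - pr_A_and_not_A
    assert pr_A_or_not_A == 1
```

The restriction to →-free A matters because for A = B → C the negation value is governed by PI6 and PI7 rather than NP.5, which is exactly the counterexample discussed next.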
A final word regarding the restriction that A should not contain any → in order to have Pr(A ∨ ∼A, Γ) = 1 when Pr(A, Γ) is defined. Just consider the following counterexample:

Pr((B → C) ∨ ∼(B → C), Γ) = Pr(B → C, Γ) + Pr(∼(B → C), Γ) − Pr((B → C) ∧ ∼(B → C), Γ) = Pr(C, Γ ∪ {B}) + Pr(B ∧ ∼C, Γ), when defined.

One can easily verify that this is ≤ 1 in general; it is 1 just in the limiting case where Pr(C, Γ ∪ {B}) = 1. These results lend plausibility to both theses: that intuitionistic logic is the general logic of discovery, and that probabilistic interpretations are the semantic foundation of logic.

References

[1] P. H. G. Aczel, Saturated Intuitionistic Theories, [in:] Contributions to Mathematical Logic - Proceedings of the Logic Colloquium, Hannover 1966, Studies in Logic and the Foundations of Mathematics, Volume 50 (1968), pp. 1–11.
[2] H. Field, Logic, Meaning, and Conceptual Role, The Journal of Philosophy LXXXIV, No. 7 (1977), pp. 379–409.
[3] S. Frankowski, Partial and Intuitionistic Logic, unpublished manuscript.
[4] S. Kripke, Semantical analysis of intuitionistic logic I, [in:] Formal Systems and Recursive Functions, North Holland, Amsterdam (1965), pp. 92–130.
[5] S. Lapierre and F. Lepage, Completeness and Representation Theorem for Epistemic States in First-order Predicate Calculus, Logica Trianguli 3 (1999), pp. 85–111.
[6] F. Lepage, Partial Monotonic Protothetics, Studia Logica 65 (2000), pp. 147–163.
[7] F. Lepage and C. Morgan, Probabilistic Canonical Models for Partial Logic, Notre Dame Journal of Formal Logic 44:3 (2003), pp. 125–138.
[8] C. Morgan, There is a Probabilistic Semantics for Every Extension of Classical Sentence Logic, Journal of Philosophical Logic 11 (1982), pp. 431–442.
[9] C. Morgan, Canonical Models and Probabilistic Semantics, [in:] Logic, Probability, and Science, edited by N. Shanks and R. B. Gardner, Rodopi, Atlanta/Amsterdam, 2000, pp. 17–29.
[10] C. Morgan and H. Leblanc, Probabilistic Semantics for Intuitionistic Logic, Notre Dame Journal of Formal Logic 24:2 (1983), pp. 161–180.
[11] D. Nelson, Constructible Falsity, The Journal of Symbolic Logic 14:1 (1949), pp. 16–26.
[12] K. Popper, Logik der Forschung, Springer Verlag, Vienna, 1934. English translation: The Logic of Scientific Discovery, Routledge, London, 1959.
[13] G. C. E. Thijsse, Partial Logic and Knowledge Representation, Eburon Publishers, Delft, The Netherlands, 1992.

Université de Montréal
e-mail: francois.lepage@umontreal.ca