Carl Friedrich von Weizsäcker-Zentrum

Courses

About the Curry-Howard correspondence (Moriconi)

The course will focus on the so-called Curry-Howard Correspondence, or Isomorphism, and the associated "propositions-as-types" notion of constructivity. This is a framework which was developed by William Howard in the late 1960s, and which realized a confluence of many theoretical threads: Curry's combinatory logic, of course, but also Church's lambda calculus, and the elaboration of the semantics of intuitionistic logic. In the background lay the dominant roles played by Hilbert, who turned the notion of "proof" into an object of mathematical study in its own right, and by Gentzen, who set in motion the machinery of structural proof theory.

The basic idea is that to each proposition in a given logic there corresponds a type of objects, namely the proofs of that proposition, so that a proposition is to be identified with the type of its proofs. In other words, a type, or set, simply is the collection of proofs of the proposition that labels it, and, reciprocally, a proposition is just the type, or set, of its proofs. Howard extended the correspondence between the combinators and the implicational fragment of intuitionistic logic to first-order predicate logic.
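The implicational core of this correspondence is easy to display in code. In the hypothetical sketch below (not part of the course materials), the combinators K and S are written as lambda terms whose types, read as formulas, are the two Hilbert-style axioms for implication, and function application plays the role of modus ponens:

```python
# K has type A -> (B -> A), i.e. it proves the axiom A ⊃ (B ⊃ A)
K = lambda a: lambda b: a

# S has type (A -> (B -> C)) -> ((A -> B) -> (A -> C)),
# i.e. it proves (A ⊃ (B ⊃ C)) ⊃ ((A ⊃ B) ⊃ (A ⊃ C))
S = lambda f: lambda g: lambda a: f(a)(g(a))

# Application corresponds to modus ponens; the classic combinatory-logic
# derivation of A ⊃ A is the term S K K, which behaves as the identity.
I = S(K)(K)
assert I(42) == 42
```

The point of the sketch is that the *program* S K K just is a *proof* of A ⊃ A: running it merely replays the normalization of that proof.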

Topics related to this course can be found here. Here are the slides.

An introduction to type logical grammars (Catta)

Logic textbooks often begin by discussing the translation of natural language expressions such as "some freshmen are intelligent" into corresponding logical formulas such as ∃x[freshman(x) & intelligent(x)]. Even for simple sentences such as this one, the logical meaning is not always obvious: does the plural "some freshmen" imply that there must be at least two intelligent freshmen for the sentence to be true, or is one enough? A translation into logic forces us to be very precise about what a sentence means.

Type logical grammars are a family of frameworks for the analysis of natural language based on logic and type theory. Such grammars are specifically designed to automate the process of translating natural language expressions into logical formulae.

Given a natural language sentence S = w1, ..., wn, a computable function l associates with each word wj a formula l(wj) of linear, non-commutative, multiplicative intuitionistic logic (e.g., the Lambek Calculus or one of its variants). The grammaticality of the sentence corresponds to the provability, in the logic in question, of the judgement l(w1), ..., l(wn) : s (where s represents the grammatical category of sentences). The same judgement can have several proofs, and each of these proofs corresponds to a distinct syntactic analysis of the sentence S. From each of these proofs, an algorithm uniquely associates a logical formula corresponding to one of the meanings of S.
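Such judgements can be tested mechanically. The following hypothetical sketch (not from the course materials) implements naive backward proof search for the product-free Lambek Calculus; with a toy lexicon in which a transitive verb gets the type (np\s)/np, the grammaticality of a sentence like "Ada likes logic" amounts to the provability of np, (np\s)/np, np ⊢ s:

```python
# Types: an atom is a string; A/B is ('/', A, B); B\A is ('\\', B, A).
def prove(gamma, c):
    """Backward proof search for product-free Lambek sequents gamma |- c."""
    if len(gamma) == 1 and gamma[0] == c:            # axiom: A |- A
        return True
    if isinstance(c, tuple):                         # right rules
        op, x, y = c
        if op == '/' and prove(gamma + [y], x):      # gamma |- A/B from gamma, B |- A
            return True
        if op == '\\' and prove([x] + gamma, y):     # gamma |- B\A from B, gamma |- A
            return True
    for i, t in enumerate(gamma):                    # left rules
        if not isinstance(t, tuple):
            continue
        op, x, y = t
        if op == '/':    # G1, A/B, D, G2 |- C from D |- B and G1, A, G2 |- C
            for j in range(i + 2, len(gamma) + 1):
                if prove(gamma[i+1:j], y) and prove(gamma[:i] + [x] + gamma[j:], c):
                    return True
        if op == '\\':   # G1, D, B\A, G2 |- C from D |- B and G1, A, G2 |- C
            for j in range(i):
                if prove(gamma[j:i], x) and prove(gamma[:j] + [y] + gamma[i+1:], c):
                    return True
    return False

np, s = 'np', 's'
tv = ('/', ('\\', np, s), np)        # transitive verb: (np\s)/np
assert prove([np, tv, np], s)        # "Ada likes logic" is grammatical
assert not prove([tv, np, np], s)    # verb-initial order is not derivable
```

Search terminates because every premise of every rule contains strictly fewer connectives than its conclusion; the distinct successful searches correspond to the distinct syntactic analyses mentioned above.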

The aim of this course is to provide a gentle introduction to type-logical grammars. We will introduce the Lambek Calculus and briefly study its proof-theoretical aspects. We will show in what sense we can test the grammaticality of a natural language sentence S by means of the Lambek Calculus. Next, we will show how each proof in the Lambek Calculus corresponds to a linear lambda term and how, from this lambda term, a logical formula corresponding to the "meaning" of S can be generated.

Topics related to this course can be found here.

Automated reasoning and proof assistants for mathematics (Koutsoukou-Argyraki)

The area of Automated Reasoning finds its early origins in Leibniz's ideas. Whitehead and Russell's efforts in the early 20th century to formalise mathematics (in the sense of expressing mathematics in symbolic logic) in their Principia Mathematica resulted in the introduction of type theory. Later on, in the late 1960s, de Bruijn developed the AUTOMATH mathematical language, a predecessor of type-theoretical proof assistants. Since then, proof assistants (also known as interactive theorem provers) have come a long way. Nowadays, modern proof assistants such as Isabelle/HOL, Lean and Coq feature extensive libraries of formalised mathematics. Proof assistants are rapidly becoming increasingly popular among mathematicians and mathematics students. We will discuss the use of proof assistants to formalise mathematics, referring to the state of the art and to recent developments involving the formalisation of advanced, research-level mathematics. As an example of an Isabelle/HOL formalisation, we will also see a formalisation of Aristotle's Assertoric Syllogistic.
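To give a first taste of what a formalisation looks like, here is a hypothetical Lean 4 proof of the syllogism Barbara (a sketch for illustration only, not the course's Isabelle/HOL development of the full assertoric syllogistic):

```lean
-- Barbara: every M is P; every S is M; therefore every S is P.
theorem barbara {α : Type} (M P S : α → Prop)
    (h₁ : ∀ x, M x → P x) (h₂ : ∀ x, S x → M x) :
    ∀ x, S x → P x :=
  fun x hS => h₁ x (h₂ x hS)
```

The proof term is checked by the system, so the inference is machine-verified rather than merely inspected by a reader.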

Topics related to this course can be found here and here. Here are the slides.

Dialogical logic, old and new (McConaughey)

In the 1970s, Paul Lorenzen and Kuno Lorenz developed "dialogical logic" on the basis of philosophical, historical, and game-theoretical considerations. It is a dynamic and pragmatist framework in which logic takes the form of dialogues and is given dialogical foundations. Since then, logicians have developed this framework in various directions, resulting in distinct traditions of dialogical logic. McConaughey's course will deal with the dialogical framework in the Lorenzen and Lorenz tradition.

In the first part, this course will give an overview of the grounding principles of the dialogical framework, introduce key notions, such as the distinction between a logical framework (standard, natural deduction, dialogical, etc.) and a logic (classical, intuitionistic, modal, etc.), and give a didactic tutorial on its formal aspects. Special attention will be given to the integration of various logics within this framework, resulting in the dialogical pluralism developed by Shahid Rahman and his collaborators in Lille (France). A second part of the course will look at the historical roots of the dialogical approach to logic by focusing on the logical texts of the Greek father of logic, Aristotle, and by sketching out a dialogical interpretation of his logic broadly construed, encompassing syllogistic (his logic proper), dialectic, and the theory of scientific inquiry. Finally, a third part of the course will go back to the modern dialogical framework in order to show how it can be used to reconstruct Aristotle's assertoric syllogistic on dialogical grounds. This requires more expressive power from the dialogical framework, which is obtained by introducing language from Per Martin-Löf's "constructive type theory" and thus using the dialogical variant "immanent reasoning" (Rahman, McConaughey, Klev, Clerbout).

Through its three parts, this course will deal with formal logic and philosophy of logic through a didactic presentation of the dialogical framework: how it deals with meaning and rules, how it distinguishes the definition of propositions from that of proofs, and how it allows for pluralism by integrating various logics (intuitionistic, classical, propositional or first-order, modal, syllogistic). What is more, by giving an overview of Aristotle's logic and of how we arrived at the form of the Organon we now have, historical grounds will be provided for the dialogical approach to modern logic.

Topics related to this course can be found here and here. Here are the slides.

Inferentialism and its problems (Cozzo)

From the premises "Rome is north of Naples" and "Pisa is north of Rome" we draw the conclusion "Pisa is north of Naples". From the premise "it rains" we conclude "it is not the case that it does not rain". This is obvious. But why is it obvious? Why are these inferences obvious? A possible answer is: if we use the words "north" and "not", accepting the aforementioned inferences is part of grasping the meanings of these words. Every speaker who understands the words knows that the inferences in question are legitimate. Inferentialism (a term coined by Robert Brandom) is the view that to know the sense of a word is to know, implicitly, that if we use the word, certain inferences involving that word must be accepted. Those inferences are meaning-constitutive. Meaning is not explained in terms of a relation of representation between linguistic entities and an independently structured reality, but in terms of the use of words in reasoning and argumentation. Since the nineteen-thirties, when Wittgenstein, Carnap and Gentzen in different ways proposed the idea, versions of inferentialism have been advocated by many philosophers. After a historical introduction, Cozzo shall give a general picture of inferentialism and of the main problems that a supporter of inferentialism has to deal with. Cozzo shall also say something about his favorite version of inferentialism.

Topics related to this course can be found here. Here are the slides.

Meaning explanations and dialogues (Klev)

In justifying a rule of inference of a logical system one must assume that the judgements serving as premisses and conclusion in any application of the rule are meaningful. (Without meaning, anything goes, and the question of justification does not arise.) In contemporary logic, meaning is usually assumed to be provided by an interpretation in a model. Such a model is, however, presented in a mathematical language whose meaning itself may be in need of explanation. For instance, a model-theoretic proposition of the form "M models t = u" is defined to mean "the interpretation of t and the interpretation of u are identical elements of M". To the question of what this definiens, in its turn, means, model theory provides no answer. In effect, this is a question about the meaning of the language of set theory, and that is not a purely mathematical question, but also in part a philosophical one. A typical answer consists in the reference to some conception of set, such as the iterative conception. Per Martin-Löf's meaning explanations serve a similar role for his constructive type theory, a constructive alternative to classical set theory. His meaning explanations are, however, much more precise, and developed in much more detail, than similar projects for other formal systems.

In this course we will, firstly, try to get clear about the nature and role of these meaning explanations. The ways in which meaning explanations differ from systems of model-theoretic semantics will be emphasized. Secondly, we will look at a novel formulation of the meaning explanations in terms of dialogues. This formulation of the meaning explanations has been motivated, at least in part, by a dialogical, or interactive, account of assertion, in which the speech act of assertion is explained together with the speech act of request. The knowledge possessed by an assertor is spelled out in dialogue rules that lay down what the assertor must do in response to a request for justification. For the forms of judgement employed in type theory it turns out that these rules, when properly understood, replicate the meaning explanations, albeit in a different formulation.

Topics related to this course can be found here and here. Here are the slides.

Model-theoretic inferentialism and categoricity (Brîncus)

Logic has both a descriptive and a deductive use in mathematics. The descriptive use of logic consists in the use of logical notions for the purpose of capturing different structures studied in mathematical theories, with the aim of attaining a categorical axiomatization of a mathematical theory S, i.e., one such that any two models of S are isomorphic. The deductive use of logic consists in the use of logical inferences for systematizing and criticizing mathematicians' reasoning about the structures they are interested in, with the aim of attaining a deductively complete system S, i.e., a system S such that for each sentence C in the language of S, either S ⊢ C or S ⊢ ~C. Both categoricity and deductive completeness are properties of the mathematical theory S formally axiomatized in a certain logical system. The underlying logical system L of S is semantically complete iff all valid formulas and all consequences can be derived in the system, i.e., if Γ ⊨ C, then Γ ⊢L C. Due to Gödel's Incompleteness Theorem, however, if we have a categorical mathematical theory S (containing elementary arithmetic), then the underlying logic cannot be semantically complete. Thus, we can have categoricity only at the price of giving up the semantic completeness of the underlying logic: we can have either categoricity of S or semantic completeness of L, but not both.

Logical inferentialism is the view that the formal rules of inference determine the meanings of the logical terms. In other words, if we know how to properly use the logical terms in logical inferences, then we know what their meanings are. Although logical inferentialists agree that the meanings are fixed by the rules of inference, some of them hold that these meanings must be characterized only by using proof-theoretic concepts, such as deduction, proof, and proof conditions (these are the proof-theoretic inferentialists), while others hold that these meanings should be characterized in model-theoretic terms, such as reference, truth conditions, and interpretations (these are the model-theoretic inferentialists). A challenge for the model-theoretic inferentialists is the existence of non-standard interpretations of the logical terms, i.e., interpretations under which the formal rules of inference remain sound but the logical terms receive non-standard meanings. Consider, for instance, an interpretation of the standard classical propositional calculus that makes true all the theorems of the calculus and false all the non-theorems. Under this interpretation all the rules of inference remain sound, but both 'A' and '~A' will be false (being non-theorems) while 'A v ~A' will be true (being a theorem): a sentence and its negation are both false, and a disjunction is true although both of its disjuncts are false. A model-theoretic logical inferentialist is interested in obtaining a categorical logical calculus, i.e., one that uniquely determines the intended meanings of the logical terms, so that all non-standard interpretations are dismissed.
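This non-standard interpretation is easy to exhibit concretely. In the following hypothetical sketch (not from the course materials), a formula counts as "true" exactly when it is a classical theorem, with theoremhood decided by truth tables, and the paradoxical assignments described above fall out immediately:

```python
from itertools import product

def atoms(f):
    """Collect the propositional atoms occurring in formula f."""
    if isinstance(f, str):
        return {f}
    if f[0] == 'not':
        return atoms(f[1])
    return atoms(f[1]) | atoms(f[2])        # 'or' and 'and'

def ev(f, v):
    """Evaluate formula f under the classical valuation v."""
    if isinstance(f, str):
        return v[f]
    if f[0] == 'not':
        return not ev(f[1], v)
    if f[0] == 'or':
        return ev(f[1], v) or ev(f[2], v)
    return ev(f[1], v) and ev(f[2], v)

def is_theorem(f):
    """f is a theorem of classical propositional logic iff it is a tautology."""
    ats = sorted(atoms(f))
    return all(ev(f, dict(zip(ats, vals)))
               for vals in product([True, False], repeat=len(ats)))

# Non-standard interpretation: a formula is "true" iff it is a theorem.
A = 'A'
assert not is_theorem(A)                    # 'A' is "false"
assert not is_theorem(('not', A))           # '~A' is "false" too
assert is_theorem(('or', A, ('not', A)))    # yet 'A v ~A' is "true"
```

Since every rule of inference takes theorems to theorems, the rules remain sound under this valuation even though disjunction no longer has its intended truth-functional meaning.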

In his course, Constantin Brîncus shall first clarify the inferentialist idea that the rules of inference fix the meanings of the logical terms by showing that it depends both on the format of the logical system (axiomatic, natural deduction, and sequent calculi) and on the way validity is defined (by using deductive models, local models, and global models) and, secondly, he shall discuss some recent approaches for obtaining the categoricity of classical logic. Due to the tension between categoricity, deductive completeness and semantic completeness mentioned above, Constantin Brîncus will argue that the categoricity of a mathematical theory should not be the primary concern for the logical inferentialist and that he should accept a semantically complete logical system that uniquely determines the meanings of its logical terms.

Topics related to this course can be found here. Here are the slides.

Proof-theoretic harmony (Pistone-Tranchini)

The notion of harmony was introduced to describe the perfect match that ought to hold between two central features of the practice of assertion. These are (i) the capability of speakers to recognise when the conditions for correctly asserting a proposition are satisfied; and (ii) the capability of reacting correctly to (or, more generally, drawing consequences from) assertions made by other speakers.

Although it seems indisputable that the conditions for asserting a proposition and the consequences that can be drawn from it cannot be determined independently of each other, little agreement has been found both on the significance and on the precise characterization of harmony.

In the setting of natural deduction, the introduction rules for the logical connectives express the conditions for correctly asserting logically complex sentences, and the elimination rules express the consequences that can be drawn from these assertions. One may therefore expect that, at least for logically complex propositions, the notion of harmony can be made precise in this formal setting.

In the first part of this course, after introducing the notion of harmony and discussing various views with regard to its normativity, we will focus on the notion of harmony as it applies to logical connectives and present the traditional account of harmony as conservativity. In the second part of the course we will describe some more recent characterizations of this notion, based on second-order intuitionistic propositional logic, and we will illustrate the possibility of accounting for "intensional" features of harmony using the notion of identity of proofs.
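Under the propositions-as-types reading, harmony for a connective shows up as the fact that an elimination applied immediately to the result of an introduction can be reduced away. A hypothetical sketch for conjunction (not part of the course materials), with pairing as the introduction rule and the projections as the eliminations:

```python
# Conjunction under propositions-as-types: a proof of A & B is a pair.
def conj_intro(a, b):      # introduction rule: from A and B, infer A & B
    return (a, b)

def conj_elim_l(p):        # elimination rule: from A & B, infer A
    return p[0]

def conj_elim_r(p):        # elimination rule: from A & B, infer B
    return p[1]

# Local reduction, the formal trace of harmony: eliminating what was
# just introduced gives back exactly the premise that was put in.
assert conj_elim_l(conj_intro('proof of A', 'proof of B')) == 'proof of A'
assert conj_elim_r(conj_intro('proof of A', 'proof of B')) == 'proof of B'
```

That the eliminations extract no more than the introduction put in is one informal gloss on harmony; the course examines how far such glosses can be made precise.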

Topics related to this course can be found here and here.