**229,99 zł**

- Publisher: John Wiley & Sons
- Category: Science and new technologies
- Language: English
- Year of publication: 2016

Written in a clear, precise, and user-friendly style, **Logic as a Tool: A Guide to Formal Logical Reasoning** is intended for undergraduates in both mathematics and computer science, and will guide them to learn, understand, and master the use of classical logic as a tool for correct reasoning. It offers a systematic and precise exposition of classical logic with many examples and exercises, and only the necessary minimum of theory.
The book explains the grammar, semantics, and use of classical logical languages and teaches the reader how to grasp the meaning of logical formulae and how to translate them to and from natural language. It illustrates with extensive examples the use of the most popular deductive systems (axiomatic systems, semantic tableaux, natural deduction, and resolution) for formalising and automating logical reasoning at both the propositional and first-order levels, and provides the reader with the technical skills needed for practical derivations in them. Systematic guidelines are offered on how to perform logically correct and well-structured reasoning using these deductive systems and the reasoning techniques they employ.
- Concise and systematic exposition, with semi-formal but rigorous treatment of the minimum necessary theory, amply illustrated with examples
- Emphasis both on conceptual understanding and on developing practical skills
- Solid and balanced coverage of syntactic, semantic, and deductive aspects of logic
- Extensive sets of exercises, many of them provided with solutions or answers
- Supplemented by a website including detailed slides, additional exercises and solutions
For more information browse the book's website at: https://logicasatool.wordpress.com


Number of pages: 576


Cover

Title Page

Copyright

Dedication

Preface

Aims

Summary of the content and main features

For the instructor

Acknowledgements

Attributions for the photos used in the book

Introduction

An Appetizer: Logical Paradoxes and Self-Reference

Chapter 1: Understanding Propositional Logic

1.1 Propositions and logical connectives: truth tables and tautologies

Exercises

1.2 Propositional logical consequence: logically correct inferences

Exercises

1.3 Logical equivalence: negation normal form of propositional formulae

Exercises

1.4 Supplementary: Inductive definitions and structural induction and recursion

Exercises

Chapter 2: Deductive Reasoning in Propositional Logic

2.1 Deductive systems: an overview

2.2 Axiomatic systems for propositional logic

Exercises

2.3 Semantic Tableaux

Exercises

2.4 Natural Deduction

Exercises

2.5 Normal forms and Propositional Resolution

Exercises

2.6 Supplementary: The Boolean satisfiability problem and NP-completeness

2.7 Supplementary: Completeness of the propositional deductive systems

Exercises

Chapter 3: Understanding First-order Logic

3.1 First-order structures and languages: terms and formulae of first-order logic

Exercises

3.2 Semantics of first-order logic

3.3 Basic grammar and use of first-order languages

Exercises

3.4 Logical validity, consequence, and equivalence in first-order logic

Exercises

3.5 Syllogisms

Exercises

Chapter 4: Deductive Reasoning in First-order Logic

4.1 Axiomatic system for first-order logic

Exercises

4.2 Semantic Tableaux for first-order logic

Exercises

4.3 Natural Deduction for first-order logic

Exercises

4.4 Prenex and clausal normal forms

Exercises

4.5 Resolution for first-order logic

Exercises

4.6 Supplementary: Soundness and completeness of the deductive systems for first-order logic

Exercises

Chapter 5: Applications: Mathematical Proofs and Automated Reasoning

5.1 Logical reasoning and mathematical proofs

Exercises

5.2 Logical reasoning on sets, functions, and relations

Exercises

5.3 Mathematical Induction and Peano Arithmetic

Exercises

5.4 Applications: automated reasoning and logic programming

Exercises

Chapter 6: Answers and Solutions to Selected Exercises

Section 1.1

Section 1.2

Section 1.3

Section 1.4

Section 2.2

Section 2.3

Section 2.4

Section 2.5

Section 3.1

Section 3.2

Section 3.3

Section 3.4

Section 3.5

Section 4.1

Section 4.2

Section 4.3

Section 4.4

Section 4.5

Section 4.6

Section 5.1

Section 5.2

Section 5.3

Section 5.4

References

Index

End User License Agreement



Chapter 2: Deductive Reasoning in Propositional Logic

Figure 2.1 Rules for Semantic Tableaux in propositional logic: unsigned version

Chapter 3: Understanding First-order Logic

Figure 3.1 The Square of Opposition

Figure 3.2 Three-set Venn diagram

Figure 3.3 Venn diagrams depicting the four types of categorical syllogistic propositions. From left to right: “All S are P”, “All S are not P”, “Some S are P”, and “Some S are not P”.

Figure 3.4 Venn diagrams for the syllogisms in Exercise 121(1) (left) and 121(3) (right)

Chapter 2: Deductive Reasoning in Propositional Logic

Table 2.1 Rules for Semantic Tableaux in propositional logic: signed version

Table 2.2 Rules for Natural Deduction in propositional logic. The vertical dots indicate derivations

Valentin Goranko

Stockholm University, Sweden

This edition first published 2016

© 2016 by John Wiley & Sons, Ltd

Registered office

John Wiley & Sons Ltd, The Atrium, Southern Gate, Chichester, West Sussex, PO19 8SQ, United Kingdom

For details of our global editorial offices, for customer services and for information about how to apply for permission to reuse the copyright material in this book please see our website at www.wiley.com.

The right of the author to be identified as the author of this work has been asserted in accordance with the Copyright, Designs and Patents Act 1988.

All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without the prior permission of the publisher.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books.

Designations used by companies to distinguish their products are often claimed as trademarks. All brand names and product names used in this book are trade names, service marks, trademarks or registered trademarks of their respective owners. The publisher is not associated with any product or vendor mentioned in this book.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. It is sold on the understanding that the publisher is not engaged in rendering professional services and neither the publisher nor the author shall be liable for damages arising herefrom. If professional advice or other expert assistance is required, the services of a competent professional should be sought.

Library of Congress Cataloging-in-Publication Data

Names: Goranko, Valentin, author.

Title: Logic as a tool : a guide to formal logical reasoning / Valentin Goranko.

Description: Chichester, UK ; Hoboken, NJ : John Wiley & Sons, 2016. | Includes bibliographical references and index.

Identifiers: LCCN 2016010458 (print) | LCCN 2016014532 (ebook) | ISBN 9781118880005 (cloth) | ISBN 9781118880050 (pdf) | ISBN 9781118880043 (epub)

Subjects: LCSH: Logic--Textbooks.

Classification: LCC BC71 .G67 2016 (print) | LCC BC71 (ebook) | DDC 511.3--dc23

LC record available at http://lccn.loc.gov/2016010458

A catalogue record for this book is available from the British Library.

This book is dedicated to those from whom I have learned and to those who will learn from it.

Unlike most books and textbooks on logic, this one purports to teach logic not so much as a subject to study, but rather as a tool to master and use for performing and structuring correct reasoning. It introduces classical logic rather informally, with very few theorems and proofs (which are mainly located in the supplementary sections). Nevertheless, the exposition is systematic and precise, without compromising on the essential technical and conceptual issues and subtle points inherent in logic.

This textbook covers only the core of classical logic, which itself is just the heart of the vast and growing body of modern logic. The main aims of the book are:

1. to explain the language, grammar, meaning, and formal semantics of logical formulae, to help the reader understand the use of classical logical languages and be able both to formalize natural language statements in them and to translate back from logical formulae to natural language;

2. to present, explain, and illustrate with examples the use of the most popular deductive systems (namely, axiomatic systems, Semantic Tableaux, Natural Deduction, and Resolution, with the only notable exclusion being Sequent Calculus, which is essentially inter-reducible with Natural Deduction) for mechanizing and “computing” logical reasoning at both the propositional and first-order levels, and to provide the reader with the necessary technical skills for practical derivations in them; and

3. to offer systematic advice and guidelines on how to organize and perform logically correct and well-structured reasoning using these deductive systems and the reasoning techniques that they provide.

The structure of the book reflects the two levels of expression and reasoning in classical logic: propositional and first-order.

The first two chapters are devoted to propositional logic. In Chapter 1 I explain how to understand propositions and compute their truth values. I then introduce propositional formulae and their truth tables and then discuss logical validity of propositional arguments. The fundamental notion here is that of propositional logical consequence. Then, in Chapter 2, I present several deductive systems used for deriving logical consequences in propositional logic and show how they can be used for checking the logical correctness of propositional arguments and reasoning. In a supplementary section at the end of the chapter I sketch generic proofs of soundness and completeness of the propositional deductive systems.

The exposition of propositional logic is uplifted to first-order logic in the following two chapters. In Chapter 3 I present first-order structures and languages and then the syntax and semantics (first informally, and then more rigorously) of first-order logic. Then I focus on using first-order languages and translations between them and natural languages. In the last section of this chapter I present and discuss the fundamental semantic concepts of logical validity, consequence, and equivalence in first-order logic. Deductive systems for first-order logic are introduced in Chapter 4 by extending the respective propositional deductive systems with additional rules for the quantifiers. Derivations in each of these are illustrated with several examples. Again in a supplementary section, I sketch generic proofs of soundness and completeness of the deductive systems for first-order logic.

Chapter 5 contains some applications of classical logic to mathematical reasoning and proofs, first in general and then specifically for sets, functions, relations, and arithmetic. It consists of concise presentations of the basic theories of these, where the proofs are left as exercises. The chapter ends with applications of classical logic to automated reasoning and theorem proving, as well as to logic programming, illustrated briefly with Prolog.

The book ends with a comprehensive set of detailed solutions or answers to many of the exercises.

The special features of this book include:

concise exposition, with semi-formal but rigorous treatment of the minimum necessary theory;

emphasis both on conceptual understanding by providing many examples, and on developing technical skills and building experience by providing numerous exercises, most of them standard, yet non-trivial, as well as full solutions or answers for many of them;

solid and balanced coverage of semantic, syntactic, and deductive aspects of logic;

some refreshing extras, such as a few logic-related cartoons scattered around, as well as many biographical boxes at the end of each section with photos and short texts on distinguished logicians, providing some background to their lives and contributions;

selected references to other books on logic, listed at the end of each section, which are suitable for further reading on the topics covered in the section; and

a supplementary website with slides, additional exercises, more solutions, and errata, which can be viewed at

https://logicasatool.wordpress.com

The textbook is intended for introductory and intermediate courses in classical logic, mainly for students in both mathematics and computer science, but is also suitable and useful for more technically oriented courses for students in philosophy and social sciences.

Dependency chart

Some parts of the text and some exercises are much more relevant to only one of the main target audiences, and I have indicated them with Mathematics Track and Computer Science Track markers in the text. Everything else that is not explicitly included in either of these tracks should be suitable for both groups. Likewise, some specific topics and exercises are somewhat more advanced and are indicated with an Advanced Track marker. These are, of course, only indications.

The whole book can be covered in one or two semester courses, depending on the background and technical level of the audience. It assumes almost no specific prior knowledge, except some general background in college maths for specific topics and examples, usually indicated in Mathematics or Advanced tracks. A dependency chart of the different sections is provided in the figure above.

This textbook grew out of lecture notes that I have compiled for introductory logic courses for students in mathematics and computer science since the late 1990s. Many people have contributed in various ways to this book over these years. I am grateful to former colleagues who have helped with valuable comments, technical or editorial corrections, and some solutions to exercises, including Thomas Bolander, Willem Conradie, Ruaan Kellerman and Claudette Robinson, as well as to many students at the University of Johannesburg, the University of the Witwatersrand, and the Technical University of Denmark, who have sent me useful feedback and have noticed some errors in the lecture notes from which this book has evolved.

I also thank Christopher Burke, Mike Cavers, and Andreï Kostyrka for generously allowing me to include some of their cartoons in the book.

I gratefully acknowledge Wikipedia, the Stanford Encyclopedia of Philosophy, and the MacTutor History of Mathematics archive of the School of Mathematics and Statistics, University of St Andrews, as the main sources of the historical and biographical information provided.

The core content of the present book is a substantially extended version of the two chapters on logic in the recently published Logic and Discrete Mathematics: A Concise Introduction (written by Willem Conradie and myself, published by Wiley). I am grateful to Wiley for the continued collaboration. In particular, I thank everyone employed or contracted by Wiley who took part in the different stages of the technical preparation of this book.

Lastly, I owe very special thanks to my life partner Nina for her personal support and understanding during the work on this book, as well as for her invaluable technical help with producing many figures and diagrams and collecting photos and references for many of the historical boxes and the cover of the book.

Most of the photos are public domain or under Creative Commons licence with free permissions. The photos of Cook, Fitting, Prawitz and Hodges are obtained from their owners or used with their permission. Attributions for the rest are listed below.

Davis

Author of photo: David Monniaux

Gentzen

Author of photo: “Eckart Menzler-Trott”, permission obtained from the MFO “Archives of the Mathematisches Forschungsinstitut Oberwolfach”

Henkin

Author of photo: George M. Bergman

Herbrand

Author of photo: Journal des Sciences

Jaśkowski

Author of photo: Alojzy Czarnecki

Kleene

Author of photo: Konrad Jacobs, Erlangen, permission obtained from the MFO “Archives of the Mathematisches Forschungsinstitut Oberwolfach”

Levin

Author of photo: Wikipedia user Sergio01

Maltsev

Author of photo: Shiltsev

Robinson

Author of photo: David Monniaux

Putnam

Author of photo: Hilary Putnam

Skolem

Author of photo: Oslo Museum: image no. OB.F06426c (Byhistorisk samling/City historic collection), via oslobilder.no

Tarski

Author of photo: George M. Bergman, permission obtained from the MFO “Archives of the Mathematisches Forschungsinstitut Oberwolfach”

Wittgenstein

Author of photo: Moritz Nähr

Zeno

Author of photo: Wikipedia user Shakko

What is logic about? What does it study and what does it offer? A common definition, found in encyclopedias, is that it is the branch of philosophy that studies the laws and rules of human reasoning. Little of this is actually correct. First, logic left the cradle of philosophy long ago and is now a truly interdisciplinary area, related and relevant not only to philosophy but also to mathematics, computer science, artificial intelligence, linguistics, social sciences, and even economics. Second, is logic really about how we reason? If that were the case, as a professional logician of many years I should already know quite well how exactly humans reason. Alas, the more experience I gain in life, the less I understand that. One thing is certain: in their everyday reasoning most people use emotions, analogies, clichés, ungrounded beliefs, and superstitions, that is, everything but logic. But then, maybe logic studies the reasoning of the rational human, for whom reasoning is a purely rational brain activity? Well, while many (but far from all) of us humans reason with our brains, this is not sufficient to understand how we do it. As the American scientist Emerson M. Pugh brilliantly put it: “If the human brain were so simple that we could understand it, we would be so simple that we couldn't.”

What does logic tell us, after all, if not how we reason? A better answer is: it tells us how we can – and ideally should – reason in a systematic and well-structured way that guarantees we always derive true and correct conclusions, provided we only use true assumptions and only apply logically correct rules of reasoning. Logic is therefore not just concerned with what is true and what is false, but rather with the correctness of our arguments about what implies what, and with the validity of our reasoning. What exactly does all that mean? This book aims to answer this question, beginning with some food for thought here.

The famous Greek philosopher Aristotle (384–322 BC), regarded as the founding father of formal logic, was the first to systematically study and classify logically correct and incorrect forms and rules of reasoning. Aristotle studied specific patterns of arguments called syllogisms. Here is a typical example (mine, not Aristotle's) of a syllogism:

All logicians are clever.
All clever people are rich.
Therefore, all logicians are rich.

The way we actually read this is as follows.

If all logicians are clever and all clever people are rich, then all logicians are rich.

This sounds intuitively like a correct piece of reasoning, and it is, but it does not mean that the conclusion is necessarily true. (In fact, unfortunately, it is not.) What then makes it correct?

Here is another example:

All natural numbers are integers.
Some integers are odd numbers.
Therefore, some natural numbers are odd numbers.

Note that all statements above are true. So, is this a correct argument? If you think so, then how about taking the same argument and replacing the words “natural numbers” by “mice,” “integers” by “animals,” and “odd numbers” by “elephants.” This will not change the logical shape of the argument and, therefore, should not change its logical correctness. The result speaks for itself, however.
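If one takes the argument to have the form “All A are B; some B are C; therefore some A are C” (the form suggested by the mice/animals/elephants substitution), its incorrectness can even be confirmed mechanically: a brute-force search over small interpretations of the three predicates finds a countermodel in which both premises hold but the conclusion fails. The following sketch is my illustration, not the book's:

```python
from itertools import product

def all_are(xs, ys):  return xs <= ys        # "All X are Y"  means X is a subset of Y
def some_are(xs, ys): return bool(xs & ys)   # "Some X are Y" means X and Y overlap

# Nonempty subsets of a two-element domain, standing in for the three predicates
subsets = [frozenset(s) for s in ({0}, {1}, {0, 1})]

countermodel = None
for A, B, C in product(subsets, repeat=3):
    if all_are(A, B) and some_are(B, C) and not some_are(A, C):
        countermodel = (set(A), set(B), set(C))
        break

print(countermodel)  # an interpretation where the premises hold but the conclusion fails
```

The countermodel found matches the substituted argument exactly: A plays the role of “mice”, B of “animals”, and C of “elephants”.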

So what makes an argument logically correct? You will also find answers to this question in this book.

Let me say a few more concrete words about the main aspects and issues of classical logic treated in this book. There are two levels of logical discourse and reasoning in classical logic. The lower level is propositional logic, introduced and discussed in the first two chapters of this book, and the higher level is first-order logic, also known as predicate logic, treated in the rest of the book.

Propositional logic is about reasoning with propositions, sentences that can be assigned a truth value of either true or false. They are built from simple, atomic propositions by using propositional logical connectives. The truth values propagate over all propositions through truth tables for the propositional connectives.

Propositional logic can only formalize simple logical reasoning that can be expressed in terms of propositions and their truth values, but it is quite insufficient for practical knowledge representation and reasoning. For that, it needs to be extended with several additional features, including constants (names) and variables for objects of any nature (numbers, sets, points, human beings, etc.), functions and predicates over objects, as well as quantifiers such as “for all objects x” and “there exists an object x such that …”. These lead to first-order languages, which (in many-sorted versions) are essentially sufficient to formalize most common logical reasoning. Designing appropriately expressive logical languages and using them to capture fragments of natural languages and reasoning is one of the main tasks of modern logic.
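Over a finite domain, the meaning of the quantifiers can be made very concrete: “for all” is a conjunction over all objects in the domain, and “there exists” is a disjunction. A minimal sketch (the domain and predicates are my own illustrative choices, not the book's):

```python
domain = range(1, 11)            # a finite domain of objects: the numbers 1..10

def even(n): return n % 2 == 0   # a unary predicate over the domain

# "for all x: even(x)"  and  "there exists x such that even(x)"
forall_even = all(even(x) for x in domain)
exists_even = any(even(x) for x in domain)

# Nested quantifiers: "for all x there exists y such that x < y".
# Over this finite domain it is false, since 10 has no larger element.
forall_exists_greater = all(any(x < y for y in domain) for x in domain)

print(forall_even, exists_even, forall_exists_greater)  # False True False
```

The same reading carries over to infinite domains, except that the truth value can then no longer be computed by exhaustive checking.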

There are three major aspects of a logical system: semantic, syntactic, and deductive. The first deals mostly with the semantic notions of truth, validity, and logical consequence, whereas the latter two deal respectively with the syntax and grammar of logical languages and with systems for logical deduction, derivations, and deductive consequences. Deductive systems are purely mechanical procedures designed to derive (deduce) logical validities and consequences by means of formal rules of inference and possibly some postulated formulae called axioms. Thus, a deductive system does not refer explicitly to the meaning (semantics) of the formulae but only treats them as special strings of symbols and acts on their shape (syntax). In principle, a deductive system can be used successfully without any understanding of what the formulae mean, and derivations in a deductive system can be performed not only by humans but also by artificial “agents” or computers. However, deductive systems are always meant to capture (or even determine) logical consequence, so, ideally, semantic logical consequence and deductive consequence should precisely match each other. If that is the case, we say that the deductive system is sound and complete, or simply adequate. The design and study of adequate and practically useful deductive systems is another major logical task.
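To give a taste of how mechanical such a procedure can be, here is a minimal sketch of propositional Resolution (one of the deductive systems presented later in the book), acting purely on the shape of clauses with no reference to their meaning. The representation and function names are mine, not the book's:

```python
# Literals are strings like "p" and "-p"; a clause is a frozenset of literals.
def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 and c2 on a complementary pair."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append((c1 - {lit}) | (c2 - {negate(lit)}))
    return out

def derives_empty(clauses):
    """Saturate under resolution; deriving the empty clause means unsatisfiable."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                if a != b:
                    for r in resolvents(a, b):
                        if not r:          # the empty clause was derived
                            return True
                        new.add(r)
        if new <= clauses:                 # nothing new: saturation reached
            return False
        clauses |= new

# {p, p -> q, not q} in clausal form; deriving the empty clause shows
# that q is a deductive consequence of p and p -> q.
cnf = [frozenset({"p"}), frozenset({"-p", "q"}), frozenset({"-q"})]
print(derives_empty(cnf))  # True
```

Note that the procedure never consults truth values: it manipulates strings of symbols only, which is exactly what makes it suitable for computers.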

The main syntactic, semantic, and deductive aspects of classical logic are discussed in detail in the book; however, there is much more, both inside and outside of classical logic, that is not treated here. In particular, logic is deeply related to: the foundations of mathematics, via axiomatic theories of sets; mathematics itself, via model theory; the important notions of algorithmic decidability and computability, via recursion theory; and the fundamentals and limitations of the deductive approach, via proof theory. All of these are major branches of logic that I will only mention briefly in the text, but much more can be found in the references. Furthermore, there is a rich variety of other, more specialized non-classical logical languages and systems better suited to specific modes and aspects of reasoning, such as intuitionistic, modal, temporal, epistemic, deontic, and non-monotonic logics, which, except for a brief discussion of intuitionistic logic, will not be covered in this book. References to relevant publications covering these topics are provided throughout.

Finally, a few words on the role of logic in the modern world. As I mentioned earlier, contemporary logic has become a highly interdisciplinary area with fundamental applications to a wide variety of scientific fields including mathematics, philosophy, computer science, artificial intelligence, and linguistics. Today logic not only provides a methodology for correct human reasoning, but also techniques and tools for automated reasoning by intelligent agents. It also provides theoretical foundations for basic concepts in computer science such as computation and computability, algorithms and complexity, and the semantics of programming languages, as well as practical tools for formal specification, synthesis, analysis, and verification of software and hardware, development and management of intelligent databases, and logic programming. The impact of logic on computer science today is often compared to the impact of differential and integral calculus on the natural sciences and engineering from the 17th century onwards.

I end this introduction with a humble hope that this book will help the reader understand and master the use of this great intellectual tool called Logic. Enjoy it!

Valentin Goranko

Stockholm, November 2015

Dear reader,

The sentence that you are reading now is not true.

Is this claim true or false? If true, then it truly claims that it is not true, so it can't be true. But then it is not true that it is not true, so it must be true! Or …?

This is a version of probably the oldest known logical paradox, dating from antiquity and known as the liar's paradox, which refers to the utterance “I am lying now.”

What is a logical paradox? It is a statement or an argument that presents an apparent logical contradiction, either with well-known and accepted truths or simply with itself. Unlike a fallacy, a paradox is not due to incorrect reasoning, but may be based on wordplay or on a subtle ambiguity in the assumptions or concepts involved. Most commonly, however, logical paradoxes arise from the use of self-reference, as in the opening sentence above. Logicians love playing with self-reference. For instance, I have added this sentence in order to make a reference to itself. And this one, which does not make a reference to itself. (Or does it …?)

A variation of the liar's paradox is Jourdain's card paradox, which does not rely on immediate self-reference but on a circular reference. Here is a simple version:

The next sentence is true. The previous sentence is false.

I end this appetizer with two more paradoxes, which are not exactly logical but semantic: again a self-referential play, but now with natural language.

The first is known as Berry's paradox. Clearly, every natural number can be defined in English using sufficiently many words. However, if we bound the number of words to be used, then only finitely many natural numbers can be defined. There will then be numbers that cannot be defined with that many words, and hence there must be a least such undefinable natural number. Now consider the following sentence: “The least natural number that is not definable in English with less than twenty words.” There is a uniquely determined natural number that satisfies this description, so it is a definition in English, right? Well, count how many words it uses.

The second is the Grelling–Nelson paradox. Divide all adjectives into two groups: an adjective is autological if and only if it describes itself, such as “English,” “short,” and “fourteen-letter;” and heterological if and only if it does not describe itself, such as “wet,” “white,” and “long.” Now, is the adjective “heterological” autological or heterological?

Propositional logic is about reasoning with propositions. These are sentences that can be assigned a truth value: true or false. They are built from primitive statements, called atomic propositions, by using propositional logical connectives. The truth values propagate over all propositions through truth tables for the propositional connectives. In this chapter I explain how to understand propositions and compute their truth values, and how to reason using schemes of propositions called propositional formulae. I will formally capture the concept of logically correct propositional reasoning by means of the fundamental notion of propositional logical consequence.

The basic concept of propositional logic is that of a proposition. A proposition is a sentence that can be assigned a unique truth value: true or false.

Some simple examples of propositions include:

The Sun is hot.

The Earth is made of cheese.

2 plus 2 equals 22.

The 1000th decimal digit of the number π is 9.

(You probably don't know whether the latter is true or false, but it is surely either true or false.)

The following are not propositions (why?):

Are you bored?

Please, don't go away!

She loves me.

x is an integer.

This sentence is false.

Here is why. The first sentence above is a question, and it does not make sense to declare it true or false. Likewise for the imperative second sentence. The truth of the third sentence depends on who “she” is and who utters the sentence. Likewise, the truth of the fourth sentence is not determined as long as the variable is not assigned a value, integer or not. As for the last sentence, the reason is trickier: assuming that it is true, it truly claims that it is false – a contradiction; assuming that it is false, it falsely claims that it is false, hence it is not false – a contradiction again. Therefore, no truth value can be consistently ascribed to it. Such sentences are known as self-referential and are the main source of various logical paradoxes (see the appetizer and Russell's paradox in Section 5.2.1).

The propositions above are very simple. They have no logical structure, so we call them primitive or atomic propositions. From primitive propositions one can construct compound propositions by using special words called logical connectives. The most commonly used connectives are:

not …, called negation, denoted ¬;

… and …, called conjunction, denoted ∧ (or sometimes &);

… or …, called disjunction, denoted ∨;

if … then …, called implication, or conditional, denoted →;

…if and only if …, called biconditional, denoted ↔.

It is often not grammatically correct to read compound propositions by simply inserting the names of the logical connectives in between the atomic components. A typical problem arises with the negation: one does not say “Not the Earth is square.” A uniform way to get around that difficulty and negate a proposition is to say “It is not the case that ….”

In natural language grammar the binary propositional connectives, plus others like but, because, unless, although, so, yet, etc. are all called “conjunctions” because they “conjoin”, that is, connect, sentences. In logic we use the propositional connectives to connect propositions. For instance, given the propositions

“Two plus two equals five” and “The Sun is hot”

we can form the propositions

“It is not the case that two plus two equals five.”

“Two plus two equals five and the Sun is hot.”

“Two plus two equals five or the Sun is hot.”

“If two plus two equals five then the Sun is hot.”

“Two plus two equals five if and only if the Sun is hot.”

For a more involved example, from the propositions (we assume we have already decided the truth value of each)

“Logic is fun”, “Logic is easy”, and “Logic is boring”

we can compose a proposition

“Logic is not easy or if logic is fun then logic is easy and logic is not boring.”

It sounds better smoothed out a bit:

“Logic is not easy or if logic is fun then it is easy and not boring.”

How about the truth value of a compound proposition? It can be computed from the truth values of its components by following the rules of ‘propositional arithmetic’:

The proposition ¬A is true if and only if the proposition A is false.

The proposition A ∧ B is true if and only if both A and B are true.

The proposition A ∨ B is true if and only if either of A or B (possibly both) is true.

The proposition A → B is true if and only if A is false or B is true, that is, if the truth of A implies the truth of B.

The proposition A ↔ B is true if and only if A and B have the same truth values.

We can systematize these rules in something similar to multiplication tables. For that purpose, and to make it easier for symbolic (i.e., mathematical) manipulations, we introduce a special notation for the two truth values by denoting the value true by T and the value false by F. Another common notation, particularly in computer science, is to denote true by 1 and false by 0.

The rules of the “propositional arithmetic” can be summarized by means of the following truth tables (A and B below represent arbitrary propositions):

A | ¬A
T | F
F | T

A B | A ∧ B  A ∨ B  A → B  A ↔ B
T T |   T      T      T      T
T F |   F      T      F      F
F T |   F      T      T      F
F F |   F      F      F      T
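The same tables can also be generated programmatically; here is a minimal Python sketch (the helper names NOT, AND, OR, IMP, IFF, and tv are my own, not the book's notation):

```python
from itertools import product

# Truth functions for the five connectives, using Python booleans.
NOT = lambda a: not a
AND = lambda a, b: a and b
OR  = lambda a, b: a or b
IMP = lambda a, b: (not a) or b   # A -> B is false only when A is true and B is false
IFF = lambda a, b: a == b         # A <-> B is true when A and B agree

def tv(x):
    # Render a Boolean as T / F, following the book's notation.
    return "T" if x else "F"

print("A B | A&B  AvB  A->B  A<->B")
for a, b in product([True, False], repeat=2):
    print(tv(a), tv(b), "|", tv(AND(a, b)), "  ", tv(OR(a, b)), "  ",
          tv(IMP(a, b)), "   ", tv(IFF(a, b)))
```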

The use and meaning of the logical connectives in natural language does not always match their formal logical meaning. For instance, quite often the conjunction is loaded with a temporal succession and causal relationship that makes the common sense meanings of the sentences “The kid threw the stone and the window broke” and “The window broke and the kid threw the stone” quite different, while they have the same truth value by the truth table of the conjunction. Conjunction in natural language is therefore often non-commutative, while the logical conjunction is commutative. The conjunction is also often used to connect not entire sentences but only parts, in order to avoid repetition. For instance “The little princess is clever and beautiful” logically means “The little princess is clever and the little princess is beautiful.” Several other conjunctive words in natural language, such as but, yet, although, whereas, while etc., translate into propositional logic as logical conjunction.

The disjunction in natural language also has its peculiarities. As for the conjunction, it is often used in a form which does not match the logical syntax, as in “The old stranger looked drunk, insane, or completely lost”. Moreover, it is also used in an exclusive sense, for example in “I shall win or I shall die”, while in formal logic we use it by convention in an inclusive sense, so “You will win or I will win” will be true if we both win. However, “exclusive or”, abbreviated Xor, is sometimes used, especially in computer science. A few other conjunctive words in natural language, such as unless, can translate into propositional logic as logical disjunction, for instance “I will win, unless I die.” However, it can also equivalently translate as an implication: “I will win, if I do not die.”

Among all logical connectives, however, the implication seems to be the most debatable. Indeed, it is not so easy to accept that a proposition such as “If 2+2=5, then the Moon is made of cheese”, if it makes any sense at all, should be assumed true. Even more questionable seems the truth of the proposition “If the Moon is made of chocolate then the Moon is made of cheese.” The leading motivation to define the truth behavior of the implication is, of course, the logical meaning we assign to it. The proposition A → B means: if A is true, then B is true.

Note that if A is not true, then the (truth of the) implication A → B requires nothing regarding the truth of B. There is therefore only one case where that proposition should be regarded as false, namely when A is true, and yet B is not true. In all other cases we have no reason to consider it false. For it to be a proposition, it must be regarded true. This argument justifies the truth table of the implication. It is very important to understand the idea behind that truth table, because the implication is the logical connective most closely related to the concepts of logical reasoning and deduction.

It helps to think of an implication as a promise. For instance, Johnnie's father tells him: “If you pass your logic exam, then I'll buy you a motorbike.” Then consider the four possible situations: Johnnie passes or fails his exam and his father buys or does not buy him a motorbike. Now, see in which of them the promise is kept (the implication is true) and in which it is broken (the implication is false).

Some terminology: the proposition A in the implication A → B is called the antecedent and the proposition B is called the consequent of the implication.

The implication A → B can be expressed in many different but “logically equivalent” (to be defined later) ways, which one should be able to recognize:

A implies B.

B follows from A.

If A then B.

B if A.

A only if B.

B whenever A.

A is sufficient for B. (Meaning: the truth of A is sufficient for the truth of B.)

B is necessary for A. (Meaning: the truth of B is necessary for A to be true.)

It can be seen from the truth tables that the truth value of a compound proposition does not depend on the meaning of the component propositions, but only on their truth values. To check the truth of such a proposition, we merely need to replace all component propositions by their respective truth values and then “compute” the truth of the whole proposition using the truth tables of the logical connectives. It therefore follows that

“It is not the case that two plus two equals five”

is true;

“Two plus two equals five and the Sun is hot”

is false;

“Two plus two equals five or the Sun is hot”

is true; and

“If two plus two equals five, then the Sun is hot”

is true (even though it does not make good sense).

For the other example, suppose we agree that

“Logic is fun” is true,

“Logic is boring” is false,

“Logic is easy” is true.

Then the truth value of the compound proposition

“Logic is not easy or if logic is fun then it is easy and not boring.”

can be determined just as easily. However, in order to do so, we first have to analyze the syntactic structure of the proposition, that is, to determine how it has been composed, in other words in what order the logical connectives occurring in it have been applied. With algebraic expressions, that analysis is a little easier, thanks to the use of parentheses and the established priority order among the arithmetic operations. We also make use of parentheses and rewrite the sentence in the way (presumably) we all understand it:

“(Logic is not easy) or ((if logic is fun) then ((logic is easy) and (logic is not boring))).”

The structure of the sentence should be clear now. We can however go one step further and make it look exactly like an algebraic expression by using letters to denote the occurring primitive propositions. For example, let us denote

“Logic is fun” by F,

“Logic is boring” by B, and

“Logic is easy” by E.

Now our compound proposition can be neatly rewritten as

(¬E) ∨ (F → (E ∧ (¬B))).

In our rather informal exposition we will not use parentheses very systematically, but only whenever necessary to avoid ambiguity. For that purpose we will, like in arithmetic, impose a priority order among the logical connectives, namely:

the negation has the strongest binding power, that is, the highest priority;

then come the conjunction and disjunction;

then the implication; and

the biconditional has the lowest priority.

The proposition ¬E ∨ (F → E ∧ ¬B) is a simplified version of (¬E) ∨ (F → (E ∧ (¬B))).

The last step is to compute the truth value. Recall that it is not the actual meaning of the component propositions that matters but only their truth values, so we can simply replace the atomic propositions F, B, and E by their truth values and perform the formal computation following the truth tables step-by-step:

So, logic is easy after all! (At least, so far.)
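The step-by-step computation can be cross-checked mechanically. In this Python sketch, f, b, and e stand for “Logic is fun”, “Logic is boring”, and “Logic is easy” (the lowercase letters are my own labels):

```python
# Truth values agreed on above:
# "Logic is fun" = True, "Logic is boring" = False, "Logic is easy" = True.
f, b, e = True, False, True

def implies(a, b):
    # A -> B is false only when A holds and B fails.
    return (not a) or b

# "(Logic is not easy) or (if logic is fun then (logic is easy and not boring))"
value = (not e) or implies(f, e and (not b))
print(value)  # -> True: the compound proposition is true
```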

If we only discussed particular propositions, our study of logic would be no more useful than a study of algebra based on particular equalities such as 2 + 3 = 5 or 3 · 4 = 12. Instead, we should look at schemes of propositions and their properties, just as we study algebraic formulae and equations and their properties. We call such schemes of propositions propositional formulae.

I first define a formal language in which propositional formulae, meant to be templates for composite propositions, will be special words. That language involves:

propositional constants: special fixed propositions ⊤, which always takes the truth value true, and ⊥, which always takes the truth value false;

propositional variables, possibly indexed, to denote unspecified propositions in the same way as we use algebraic variables to denote unspecified or unknown numbers;

the logical connectives that we already know; and

auxiliary symbols: the parentheses ( and ), used to indicate the order of application of logical connectives and make the formulae unambiguous.

Using these symbols we can construct propositional formulae in the same way in which we construct algebraic expressions from variables and arithmetic operations. Here are a few examples of propositional formulae: p, ¬q, (p ∧ ⊤), ((p → q) ∨ ¬r).

There are infinitely many possible propositional formulae so we cannot list them all here. However, there is a simple and elegant way to give a precise definition of propositional formulae, namely the so-called inductive definition (or recursive definition). It consists of the following clauses or formation rules:

1.

Every propositional constant or variable is a propositional formula.

2.

If A is a propositional formula then ¬A is a propositional formula.

3.

If A and B are propositional formulae then each of (A ∧ B), (A ∨ B), (A → B), and (A ↔ B) is a propositional formula.

We say that a propositional formula is any string of symbols that can be constructed by applying – in some order and possibly repeatedly – the rules above, and only objects that can be constructed in such a way are propositional formulae.

Note that the notion of propositional formula that we define above is used in its own definition; this is the idea of structural induction. The definition works as follows: the first rule above gives us some initial stock of propositional formulae; as we keep applying the other rules, we construct more and more formulae and use them further in the definition. Eventually, every propositional formula can be obtained in several (finitely many!) steps of applying these rules. We can therefore think of the definition above as a construction manual prescribing how new objects (here, propositional formulae) can be built from already constructed objects. I discuss inductive definitions in more detail in Section 1.4.5.
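The inductive definition translates directly into a recursive datatype: each formation rule becomes a way of building a bigger value from smaller ones. A Python sketch (the class names are my own, not the book's):

```python
from dataclasses import dataclass

# Formation rule 1: propositional constants and variables are formulae.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Const:
    value: bool  # True for the constant "true", False for "false"

# Formation rule 2: the negation of a formula is a formula.
@dataclass(frozen=True)
class Not:
    sub: object

# Formation rule 3: a binary connective applied to two formulae is a formula.
@dataclass(frozen=True)
class Bin:
    op: str      # one of "and", "or", "imp", "iff"
    left: object
    right: object

# Example: the formula  (not p) or (p -> q)  built bottom-up,
# exactly as the definition prescribes: variables first, then larger formulae.
p, q = Var("p"), Var("q")
phi = Bin("or", Not(p), Bin("imp", p, q))
```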

From this point, I omit the unnecessary pairs of parentheses according to our earlier convention whenever that would not lead to syntactic ambiguity.

The formulae that are used in the process of the construction of a formula A are called subformulae of A. The last propositional connective introduced in the construction of A is called the main connective of A, and the formula(e) to which it is applied is/are the main subformula(e) of A. I make all these notions more precise in what follows.

One construction sequence for the formula

¬E ∨ (F → E ∧ ¬B)

is

E, B, F, ¬E, ¬B, E ∧ ¬B, F → E ∧ ¬B, ¬E ∨ (F → E ∧ ¬B).

For instance, the subformula E ∧ ¬B has main connective ∧ (the only occurrence of ∧) in it, and its main subformulae are E and ¬B; the first occurrence of ¬ is the main connective of ¬E and its only main subformula is E; and the only occurrence of ∨ is the main connective of the whole formula, the main subformulae of which are ¬E and F → E ∧ ¬B.

A sequence of formulae constructed in the process of applying the definition and ending with is called a construction sequence of a formula. A formula has many construction sequences and a construction sequence may contain many redundant formulae. A better notion for capturing the construction of a formula is the construction tree of that formula. A construction tree is a tree-like directed graph with nodes labeled with propositional constants, variables, and propositional connectives, such that:

1.

Every leaf is labeled by a propositional constant or variable.

2.

Propositional constants and variables label only leaves.

3.

Every node labeled with ¬ has exactly one successor node.

4.

Every node labeled with any of ∧, ∨, →, or ↔ has exactly two successor nodes: a left and a right successor.

A construction tree therefore looks like:

Every construction tree defines a formula A, built starting from the leaves and going towards the root, by applying at every node the formula construction rule corresponding to the label at that node. The formulae constructed in the process are precisely the subformulae of A, and the propositional connective labeling the root of the construction tree of a formula is the main connective of A.

The formula ¬E ∨ (F → E ∧ ¬B) has the following construction tree:

The parsing tree of a formula looks the same as its construction tree but is produced in inverse order, starting from the main connective (if any), drawing edges to the main components, and then recursively producing the parsing trees for each of them.

A propositional formula is a scheme that becomes a proposition whenever we substitute propositions for all occurring propositional variables. I, of course, mean uniform substitutions, that is, the same variables are replaced by the same propositions throughout the formula.

We cannot attribute a truth value to a propositional formula before we assign concrete propositions to all occurring propositional variables, for the same reason that we cannot evaluate x + y before we have assigned values to x and y. However, remember that in order to evaluate the truth value of a compound proposition, we only need to know the truth values of the occurring atomic propositions and not the propositions themselves.

For instance, if we substitute the propositions “0.5 is an integer” for p, “2 is less than 3” for q, and “the Moon is larger than the Sun” for r in the propositional formula

we find that the resulting proposition is true:

If, however, we substitute any true propositions for and and a false proposition for , then the resulting proposition will be false:

A function that assigns truth values to propositional variables in a given set is called a truth assignment for that set of variables. If a truth assignment τ renders a formula A true, we say that τ satisfies A, denoted τ ⊨ A. A propositional formula is satisfiable if it is satisfied by some truth assignment.

For instance, the formula p ∧ ¬q is satisfiable, for example by an assignment making p true and q false, while the formula p ∧ ¬p is not satisfiable.

Clearly, the truth of a given propositional formula only depends on the truth values assigned to the variables occurring in that formula. We can therefore think of propositional formulae as functions from truth assignments to truth values. We can tabulate the “behavior” of any propositional formula in a truth table where we list all possible truth assignments of the occurring variables, and for each of them we compute the corresponding truth value of the formula. We can do that by successively computing the truth values of all occurring subformulae, as we did just now. For example, the truth table for the above formula is compiled as follows:

Complete the last two rows.

Truth tables can be somewhat simplified if we notice that every occurrence of a logical connective in a propositional formula determines a unique subformula where that occurrence is the main connective of that subformula. We can now simplify the truth table of the formula by listing only the occurring variables and the whole formula, and then computing the truth table of every subformula in the column below the corresponding main connective:

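Compiling such a truth table amounts to enumerating all truth assignments of the occurring variables and evaluating the formula for each. A Python sketch, using an illustrative formula of my own (not necessarily the one from the table above):

```python
from itertools import product

def implies(a, b):
    # A -> B is false only when A is true and B is false.
    return (not a) or b

# An illustrative formula in variables p, q, r:  (p and q) -> r
def formula(p, q, r):
    return implies(p and q, r)

def tv(x):
    return "T" if x else "F"

# One row per truth assignment: 2^3 = 8 rows for three variables.
print("p q r | (p&q)->r")
for p, q, r in product([True, False], repeat=3):
    print(tv(p), tv(q), tv(r), "|", tv(formula(p, q, r)))
```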

We therefore see that a propositional formula can represent a true or a false proposition, depending on the choice of the propositions substituted for the occurring propositional variables, or rather on their truth values.

A propositional formula A is a tautology if it obtains the truth value T for every assignment of truth values to the variables occurring in A.

The claim that A is a tautology is denoted ⊨ A.

Tautologies are also called (logically) valid formulae.

Thus, a tautology always renders a true proposition; it represents a logical law in the same way as the identity (x + y)² = x² + 2xy + y² represents a law of arithmetic and, therefore, holds no matter what values we assign to x and y.

Here are a few simple tautologies that represent some important features of propositional logic:

A ∨ ¬A: the law of excluded middle, which states that every proposition is either true or false (and, in the latter case, ¬A must be true);

¬(A ∧ ¬A): the law of non-contradiction, which states that it cannot be the case that both A and ¬A are true;

A → A: this is always true by the very meaning (and truth table) of →;

likewise, A ↔ A is always true by the very meaning and truth table of ↔; and

(A ∧ (A → B)) → B: if A is true and it is true that A implies B, then B is true. This reflects the meaning of the implication.

How can we determine whether a propositional formula A is a tautology? Quite easily: complete the truth table of A and check whether it always takes the truth value true. Let us check some of the examples mentioned above:
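A brute-force check of this kind simply runs through the entire truth table. A small Python sketch (the function name is_tautology is my own), verifying the law of excluded middle and the modus ponens formula from the list above:

```python
from itertools import product

def is_tautology(f, nvars):
    # f is a Boolean function of nvars arguments;
    # a tautology is true under every assignment.
    return all(f(*vals) for vals in product([True, False], repeat=nvars))

def implies(a, b):
    return (not a) or b

# A or not A  -- the law of excluded middle
print(is_tautology(lambda a: a or (not a), 1))                        # True

# (A and (A -> B)) -> B  -- reflects modus ponens
print(is_tautology(lambda a, b: implies(a and implies(a, b), b), 2))  # True

# A -> B alone is not a tautology: it fails when A is true and B is false.
print(is_tautology(implies, 2))                                       # False
```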

The opposite concept of a tautology is a contradictory formula, that is, a formula that always takes the truth value false. For example, A ∧ ¬A is a contradictory formula. A formula that is not a tautology, that is, one rendered false by some truth assignment, is called falsifiable.

How are the concepts of tautology and contradictory formula, and the concepts of satisfiable and falsifiable formula related?

1.

A formula is a tautology precisely when its negation is a contradictory formula, and is contradictory precisely when its negation is a tautology.

2.

A formula is satisfiable precisely when its negation is falsifiable, and is falsifiable precisely when its negation is satisfiable.

Checking tautologies with truth tables is straightforward but rather laborious and – let's admit – not very exciting. There is a somewhat streamlined and more intelligent method whereby we attempt to show that the formula is not a tautology by searching for an appropriate falsifying truth assignment, that is, a combination of truth values of the propositional variables that renders it false. If we succeed in finding such an assignment, then the formula is indeed not a tautology. If, however, when exploring a possible case we reach a state where some variable is required to be both true and false, that is clearly a contradiction, meaning that the case we are exploring is impossible and we must abandon it. If all possible attempts to produce a falsifying assignment end up with a contradiction, then we have actually proved that the formula cannot be falsified; it must therefore be a tautology.

The systematic search for a falsifying assignment is based on a step-by-step decomposition of the formula by using the truth tables of the propositional connectives occurring in it. Let us see how this works on some examples.

To make it more succinct I will use signed formulae, that is, expressions of the type A : T, meaning “A must be true”, and A : F, meaning “A must be false”.
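The systematic hand search carried out in the examples below can also be cross-checked by brute force: enumerate all assignments and collect those that falsify the formula. A Python sketch (the function and the example formulae are my own illustrations, not the book's examples):

```python
from itertools import product

def implies(a, b):
    # A -> B is false only when A is true and B is false.
    return (not a) or b

def falsifying_assignments(f, names):
    # Return every assignment (as a dict) that makes f false.
    hits = []
    for vals in product([True, False], repeat=len(names)):
        if not f(*vals):
            hits.append(dict(zip(names, vals)))
    return hits

# (p -> q) -> (not q -> not p): no falsifying assignment, hence a tautology.
contrap = lambda p, q: implies(implies(p, q), implies(not q, not p))
print(falsifying_assignments(contrap, ["p", "q"]))  # -> []

# p -> q is falsified exactly when p is true and q is false.
print(falsifying_assignments(implies, ["p", "q"]))  # -> [{'p': True, 'q': False}]
```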

1.

Consider the formula .

To falsify it, it must be the case that T and .

For the former, it must be the case that , hence and .

For the latter, it must be the case that and .

This implies that must be both true and false, which is impossible. Our attempt to falsify the formula has failed, and it is therefore a tautology.

2.

Consider now .

To falsify it, it must be the case that while .

Therefore and .

For the latter, or . Let us consider both cases:

Case 1: . This contradicts .

Case 2: . Then . In this case we have not reached any contradiction and there is nothing more we can do in order to obtain one. Indeed, we can check that and renders the formula false.

3.

A contradiction in the truth values can be reached on any

subformula

, not necessarily a variable. For instance, take . For it to be false, and must be the case. From the latter, and , which contradicts the former.

4.

Finally, take . For it to be false, there are two possible cases:

Case 1: and . The latter implies and , hence , , . For the former, two sub-cases are possible:

Case 1a: . Then , which is a contradiction with .

Case 1b: . Then:

Case 1bi: , a contradiction with .

Case 1bii: and , again a contradiction but now with .

Note that the consideration of these sub-cases could have been avoided, if we had noticed that the assignment , , renders .

Case 2: and . The former implies and , that is, , , . Again, we can either notice that this assignment renders , or consider the cases for and see that all lead to a contradiction.

The formula cannot be falsified in either case, so it is a tautology.

This method can be formalized and mechanized completely into a kind of deductive system, called Semantic Tableaux, which is presented in Section 2.3.

For helpful and accessible introductions to propositional logic and discussions of the issues covered here plus more, see Tarski (1965), Gamut (1991), Jeffrey (1994), Barwise and Etchemendy (1999), Nederpelt and Kamareddine (2004), Boole (2005), Bornat (2005), Hodges (2005), Chiswell and Hodges (2007), Makinson (2008), Ben-Ari (2012), and van Benthem et al. (2014).

Some suitable books on philosophical logic, old and new, include Carroll (1897), Kalish and Montague (1980), Smith (2003), Copi et al. (2010), and Halbach (2010).

For more technical books on mathematical logic, see Shoenfield (1967), van Dalen (1983), Hamilton (1988), Mendelson (1997), Enderton (2001), and Hedman (2004).

For books on computational and applied logic the reader is referred to Gallier (1986), Nerode and Shore (1993), and Fitting (1996).

Last but not least, some fun logic books include Carroll (1886) and Smullyan (1998, 2009a, b, 2011, 2013, 2014).

1.1.1

Which of the following are propositions? (Assume that John, Mary and Eva are concrete individuals.)

(a)

(b)

(c)

(d) Will you marry me?

(e) John married on 1 January 1999.

(f) John must marry Mary!

(g) I told her about John.

(h) Mary is not happy if John married Eva.

(i) Who is Eva?

(j) Mary is not happy if .

(k) This sentence refers to itself.

(l) This sentence is true.

(m) If you are reading this sentence now, then it is not true.

1.1.2

If and are true propositions and and are false propositions, determine the truth values of the following compound propositions without using truth tables.

(a)

(b)

(c)

(d)

(e)

(f)

(g)

(h)

1.1.3

Determine the antecedent and the consequent in each of the following implications.

(a) Whenever John talks everyone else listens.

(b) Everyone else listens if John talks.

(c) John talks only if everyone else listens.

(d) If everyone else listens, John talks.

(e) An integer is positive if its cube is positive.

(Hint: To make it easier to reason, introduce a name for the object in question, for example “An integer n is positive if the cube of n is positive.”)

(f) An integer is positive only if its cube is positive.

(g) A function is continuous whenever it is differentiable.

(h) The continuity of a function is necessary for it to be differentiable.

(i) The continuity of a function is sufficient for it to be differentiable.

1.1.4

A positive integer p is called prime if p > 1 and p is divisible only by 1 and by itself. Which of the following conditions are sufficient and which are necessary for the truth of “p is not prime”, where p is some (given) positive integer?

(a) p is divisible by 3.

(b) p is even.

(c) p is divisible by 6.

(d) p has at least two different factors.

(e) p has more than two different factors.

(f) .

(g) p has a factor different from .

(h) p has a prime factor.

(i) p has a prime factor different from .

1.1.5

Write each of the following composite propositions in a symbolic form by identifying its atomic propositions and logical structure. Then determine its truth value.

(a) The Earth rotates around itself and, if the Moon rotates around the Earth, then the Sun rotates around the Moon.

(b) If the Sun rotates around the Earth or the Earth rotates around the Moon then the Sun rotates around the Moon.

(c) The Moon does not rotate around the Earth if the Sun does not rotate around the Earth and the Earth does not rotate around the Moon.

(d) The Earth rotates around itself only if the Sun rotates around the Earth or the Moon does not rotate around the Earth.

(e) The Earth rotates around itself if and only if the Moon does not rotate around the Earth or the Earth does not rotate around the Moon.

1.1.6

Determine the truth value of the proposition in each of the following cases, without using truth tables. (Hint: if necessary, consider the possible cases.)

(a) and are true.

(b) is true and is false.

(c) and are true.

(d) Each of , , and is true.

(e) Each of , , and is true.

1.1.7

Let , and be propositions. Show that:

(a) If the propositions and are true, then is true.

(b) If the propositions and are true, then is true.

(c) If , and are true, then is true.

(d) If are true, then is false.

(e) If , , and are true, then is false.

(f) If is true and is false, then is false.

(g) If and are true and is false, then is true.

(h) If and are true, then is true.

(i) If and are true and is false, then is true.

(j) If is true and is false, then is false.

1.1.8

Construct the truth tables of the following propositional formulae, and determine which (if any) of them are tautologies and which are contradictory formulae.

(a)

(b)

(c)

(d)

(e)

(f)

(g)

(h)

(i)

(j)

1.1.9

Determine which (if any) of the following propositional formulae are tautologies by searching for falsifying truth assignments.

(a)

(b)

(c)

(d)

(e)

(f)

(g)

(h)

(i)

(j)

(k)

(l)

(m)

(n)

1.1.10

Lastly, some logical puzzles. On the remote planet Nologic there are two types of intelligent creatures: truth-tellers, who always tell the truth, and liars, who (you guessed it) always lie. It is not possible to distinguish them by appearance, but only by the truth or falsity of the statements they make.

(a) A space traveler visited Nologic and met two inhabitants, P and Q. He asked them: “Is any of you a liar?” “At least one of us is a liar”, replied P. Can you find out what P and Q are?

(b) Next, the stranger met two other inhabitants and asked one of them “Is any of you a liar?”. He got a “yes” or “no” answer and from that answer was able to determine for each of them whether he is a liar or not. What was the answer and what was the stranger's conclusion?