Welcome to semantics!


      This class will provide an introduction to basic semantic concepts as well as experimental methods for semantic study. Semantics is concerned with a central component of language, the communication of meaning, and yet the linguistic study of meaning remains relatively undeveloped in comparison with syntax and phonology. Philosophers have discussed the nature of meaning for thousands of years. A basic part of any semantics course introduces students to the interesting range of problems that philosophers have identified in the study of meaning. The text I chose for the course mainly takes a philosophical approach to semantics. It includes an introduction to predicate logic and logical connectives. The hallmark of philosophical semantics is a reliance on logical argument.


      Since this is a linguistics course, we shall ask whether linguistics can provide an alternative to philosophical semantics. Linguists attempt to describe the language that speakers actually use in real time, and linguistic methods open semantic phenomena to empirical investigation. One challenge I will put to you will be to develop an experiment that investigates some aspect of semantics.


      Read the introduction to Putnam’s book to see a philosopher in action. Putnam uses a wide range of examples from philosophy, physics and mathematics to make his points, so be sure to ask in class about any examples you don’t understand. He begins the introduction with a discussion of Turing Machines.


      At the dawn of the modern computing era (when the word ‘computer’ referred to humans who did calculations rather than machines), Alan Turing (1936) proposed the thesis that any fully specified algorithm can be realized by a simple device now known as a Turing machine. A Turing machine scans the symbols in its input, which is typically depicted as a one-dimensional tape divided into squares, each carrying a single symbol. After reading a symbol, the machine can change the symbol on the tape, move the tape to the right or to the left, and/or change its internal state. The machine’s reaction to the input symbol will depend on the machine’s internal state and the symbol being read. Turing machines are assumed to have a finite number of internal states.


      I provide a transition table for a simple Turing machine in Figure 1. The Odd_or_Even machine scans a tape with the symbols 0, 1 and !, and reports whether it has found an odd or even number of ones by the time it scans the ! symbol. When the machine scans a zero, it maintains its current state, but when the machine scans a one, it changes to the opposite state. While this is not an impressive result, it underlines the surprising nature of Turing’s claim: machines built from the same simple ingredients as our modest Odd_or_Even device can compute any fully specified algorithm.


Figure 1. Transition table for an Odd_or_Even Turing machine


 

                     Input
  State       0        1        !        Move
  Even      Even      Odd      Halt      Right
  Odd       Odd       Even     Halt      Right
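The transition table in Figure 1 can be simulated in a few lines of code. Below is a minimal sketch in Python (my own rendering, not part of the text): the dictionary encodes the state transitions for inputs 0 and 1, and the machine halts when it reads the ! symbol, reporting its current state.

```python
# State transitions from Figure 1: on 0 keep the current state,
# on 1 flip between Even and Odd.
TRANSITIONS = {
    ('Even', '0'): 'Even',
    ('Even', '1'): 'Odd',
    ('Odd',  '0'): 'Odd',
    ('Odd',  '1'): 'Even',
}

def odd_or_even(tape):
    """Scan the tape left to right, tracking the parity of the ones;
    halt at '!' and report the current state."""
    state = 'Even'              # zero ones seen so far, an even number
    for symbol in tape:
        if symbol == '!':
            return state        # halt: report Odd or Even
        state = TRANSITIONS[(state, symbol)]
    raise ValueError("tape has no '!' end marker")

print(odd_or_even('0110!'))    # Even: two ones
print(odd_or_even('1!'))       # Odd: one one
```

Note that the simulation collapses the Move column: because the machine always moves right, the left-to-right loop over the tape plays that role.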



      Any operation that can be performed by a Turing machine is said to be a computable function. More formally, a function is computable if some Turing machine produces the function’s value for every input, halting after a finite number of steps. Hopcroft and Ullman provide a proof that recognizing the expressions of a regular language is a computable function. There are functions, however, that are not computable. Minsky (1967) provides a proof that the function that decides whether a machine will ever halt on a given input is not computable.
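To get a feel for why halting is hard to decide, consider what simulation alone can tell us. The sketch below (my own illustration, not Minsky's proof) runs a machine for a bounded number of steps: if it halts within the bound we have a definite answer, but if it does not, no finite bound ever entitles us to conclude that it never will.

```python
def runs_within(step_fn, state, max_steps):
    """Apply the one-step function step_fn repeatedly from state.
    Return True if the machine halts (reaches None) within max_steps,
    otherwise return 'unknown' -- a non-halt so far proves nothing."""
    for _ in range(max_steps):
        state = step_fn(state)
        if state is None:
            return True          # observed a halt: definite answer
    return 'unknown'             # still running: no conclusion

# Two toy "machines": counting down halts at zero; counting up never halts.
countdown = lambda n: None if n == 0 else n - 1
countup = lambda n: n + 1

print(runs_within(countdown, 5, 100))   # True
print(runs_within(countup, 5, 100))     # 'unknown'
```

A decision procedure would have to replace that 'unknown' with a definite no, and Minsky's argument shows that no Turing machine can do so for every machine and input.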


      Putnam identifies “functionalism” with the “computational view of the mind” and states that he is now critical of this approach to the study of mental states. We will discuss some of the terms Putnam introduces. Be sure to ask about any terms you are not familiar with!