Kinds of Meaning


Kearns begins with a discussion of the difference between semantics and pragmatics. Both of these linguistic fields study meaning, but they approach the topic from different directions. Semantics approaches meaning from a language-internal perspective—the literal meaning of words and sentences. Pragmatics approaches meaning from a language-external perspective—the ways in which words and sentences relate to the external context.


INTERNAL PERSPECTIVE (Literal Meaning)  >  INTERPRETATION  <  EXTERNAL PERSPECTIVE (Nonlinguistic Context)


Think about the various meanings of the word paper.


How would context help you decide which of these potential meanings would be useful in interpreting the sentence I forgot the paper?



1.1.1 Denotation and Sense


 

[Images: a chair and a carburetor]


Ask anyone about the meaning of a common noun such as chair or carburetor and chances are they will point to an instance of such an object if one is handy. This type of ostensive definition satisfies our intuitions about what words mean by pointing to something the word denotes. The first question to ask is whether such denotations are all there is to word meaning.


Kearns offers a second way to point to the meaning of words through definition. Dictionaries make use of definitions to suggest the meaning of words.


The American Heritage College Dictionary provides the following definitions:


opossum

1. Any of various nocturnal, usually arboreal marsupials of the family Didelphidae, ... of the Western Hemisphere, having a thick coat of hair, a long snout, and a long prehensile tail.

2. Any of several similar marsupials of Australia belonging to the family Phalangeridae.


diapir

An anticlinal fold in which a mobile core, such as gypsum, has pierced through the more brittle overlying rock.


As Kearns states, dictionary definitions attempt to match the sense of a word with the sense of another expression. What shortcomings do you see in these definitions for opossum and diapir? Would you feel confident pointing to a denotation based on these definitions?


 

[Images: an opossum and a diapir]


The main point that Kearns is making is that there are two fundamental aspects to word meaning: sense and reference. Sense points to the concept that words express while reference points to what the words denote. While these two aspects are closely related, some examples suggest they are not equivalent. Think about Kearns’ Mr. Muscle Beach example.


Kearns contrasts encyclopedic information about denotations with the dictionary meanings of words. Encyclopedic knowledge corresponds to the knowledge experts in the field command about objects while dictionary meaning corresponds to the everyday knowledge that language speakers control about word meaning. An important issue in semantics is the relation between encyclopedic knowledge and dictionary meaning. They depend on each other, but they are not equivalent. How do ordinary speakers use such words as elm and beech successfully without controlling encyclopedic knowledge about their denotations?



1.1.2 Lexical and Structural Meaning


Another basic distinction in semantics is the contrast between lexical meaning and sentence meaning. We know that sentence structure makes a contribution to sentence meaning as seen in Kearns’ examples:


3a. The rat that bit the dog chased the cat.

  b. The cat that chased the dog bit the rat.


These sentences are made out of the same words, but put together in different ways. The differences tell a speaker of English what bit what and what chased what.


English speakers do not find other word orders equally interpretable, e.g.,

            Chased the dog the cat.


We use the combination of word meanings and sentence structure to compose the meanings of sentences and larger units of discourse. One of the goals of linguistic semantics is to understand how speakers construct the compositional meanings of sentences.



1.1.3 Categorematic and Syncategorematic Expressions


Linguists make a distinction between lexical and functional categories. The distinction between categorematic and syncategorematic expressions is similar. Categorematic expressions have independent denotations: they pick out categories of things. Syncategorematic expressions do not denote on their own; instead they express relations involving the categories that other expressions denote.


Kearns suggests that adjectives such as blue are categorematic because they pick out the set of blue things. Tense morphemes, on the other hand, are syncategorematic because tense serves to locate an event or state with respect to the time of utterance.


The universal quantifier all is another example of a syncategorematic expression. Quantifiers express different relationships between sets. The universal quantifier expresses the inclusion relationship between two sets.


 

[Diagram: set inclusion. All As are B: the circle for set A is contained within the circle for set B.]

Syncategorematic elements make important contributions to the structural meaning of sentences.
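A small illustration of the point that a quantifier does not name a thing but expresses a relation between sets: 'All As are B' holds just in case A is a subset of B. The sets below are invented for the sketch.

```python
# 'All As are B' is true just in case the set of As is included in the set of Bs.
# The sets here are invented purely for illustration.
dogs = {"Midge", "Keeper", "Rinny"}
barkers = {"Midge", "Keeper", "Rinny", "Happy"}

def all_quantifier(A, B):
    """Truth of 'All As are B', modeled as the subset relation A <= B."""
    return A <= B  # set inclusion

print(all_quantifier(dogs, barkers))   # True: every dog is a barker
print(all_quantifier(barkers, dogs))   # False: not every barker is a dog
```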



1.2.1 Lexical Sense


Semantic relations provide valuable clues to the meanings or senses of words. Rather than trying to define the meaning of a word in isolation, you can check to see how the meaning of a word contrasts with the meaning of another word. Antonyms provide a familiar example of a sense relation. There are different types of antonymic relations.


Complementaries divide the world into two categories, e.g., open/shut, dead/alive, true/false.


 

    Night | Day


We see that complementaries are not acceptable in certain sentences:


11a. # The door is neither open nor closed.

    b. # He shot at the target and he neither hit it nor missed it.

    c. # The dog is neither dead nor alive.


Non-complementary words allow for intermediate categories:

    Bad | Average | Good


Non-complementary words form acceptable sentences with neither:


12a. The water is neither hot nor cold.

    b. The performance was neither good nor bad.

    c. He is neither short nor tall.


Horn (2001:270) diagrams the difference between complementaries and non-complementaries as:


 

 

 

Non-complementary Opposition: F | not F and not G | G

    black | (neither) | white
    bad   | (neither) | good
    sad   | (neither) | happy

Complementary Opposition: F | G (where F = not-G and G = not-F)

    black | nonblack
    odd   | even
    male  | female

 


Horn (271) observes:


In his seminal investigation of gradable terms, Sapir (1944:133) points to the existence of a ‘psychological excluded middle’: ‘Three-term sets [superior/average/inferior, good/moderate/bad, big/medium/small, warm/lukewarm/cool] do not easily maintain themselves because psychology, with its tendency to simple contrast, contradicts exact knowledge, with its insistence on the norm, the “neither nor”’. It is because of this psychological preference for simple, either-or contrast that the ‘normed’ or middle term, occupying a ZONE OF INDIFFERENCE, tends to be ‘quasi-scientific rather than popular in character’ and that it is itself typically ungradable (?more average, ?more lukewarm). Nor is it an accident–as Sapir and Aristotle have both noted–that the zone of indifference must often be characterized negatively, as ‘neither X nor Y’.


Other important semantic relations include:


 

    synonyms     words with the same meaning         sofa::couch, insect::bug
    hyponyms     words in a kind relation            woman::animal, rose::plant
    meronyms     words in a part/whole relation      finger::hand, leg::table


A semantic theory should provide an explanation for all of these types of semantic relations.



1.3.1 Denotations


Denotations provide an important part of a speaker’s knowledge of word meaning, but we must ask whether denotation provides a complete theory of meaning. Names appear to be the purest form of denotation. They pick out individuals from the world around us.


23
    name: Midge       denotation: Midge, a small brown dog
    name: Rinny       denotation: Rinny, a fox terrier
    name: Keeper      denotation: Keeper, a faithful hound


Other categorematic words are predicates which denote sets of objects:


24
    noun: dog             denotation: the set of dogs
    adjective: brown      denotation: the set of brown things
    verb: grin            denotation: the set of grinning things



1.3.2 Possible Worlds


Although this theory of meaning appears to capture much of what we know about word meaning, it is not complete. It only applies to what we know about objects in the actual world. Language turns out to be much more than a mirror of reality. We can envision all sorts of alternative worlds. Philosophers use the term possible world to refer to hypothetical realities. Word meaning seems to involve not only denotation in this world, but denotation in all possible worlds as well.


For convenience, the term extension is used to pick out denotations in the real world. The term intension refers to denotations in all worlds—the real and the possible. Kearns provides the following example:


25
    noun: dog
        extension: the set of all dogs in the real world
        intension: the set of all dogs in all possible worlds

    adjective: brown
        extension: the set of all brown things in the real world
        intension: the set of all brown things in all possible worlds

    verb: grin
        extension: the set of all grinning things in the real world
        intension: the set of all grinning things in all possible worlds
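One common way to formalize this is to treat an intension as a function from possible worlds to extensions; the extension is then just the value of that function at the actual world. A minimal sketch in Python, with invented worlds and individuals:

```python
# A toy model: an intension maps each possible world to an extension (a set).
# The worlds and individuals are invented for illustration.
dog_intension = {
    "actual": {"Midge", "Keeper"},
    "w1": {"Midge", "Keeper", "Rinny"},   # a world with one more dog
    "w2": set(),                          # a world with no dogs at all
}

def extension(intension, world="actual"):
    """The extension is the denotation in one particular world."""
    return intension[world]

print(extension(dog_intension))          # {'Midge', 'Keeper'} (in some order)
print(extension(dog_intension, "w2"))    # set()
```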



1.3.3 Truth Conditions


The truth of a declarative sentence can be judged by comparing its meaning to actual states or events. If you hear the sentence Orville is flying, you can check its truth by seeing if, in fact, the entity named Orville is currently engaged in the action of flying. Philosophers have used this correspondence between sentence meaning and the conditions under which the sentence is true to provide a useful analysis of meaning. If you know the truth conditions for a sentence then you have a good idea of the sentence’s meaning. The extension of a sentence is its truth value, which can be true or false depending on the state of affairs in the real world. The intension of a sentence is the set of all possible worlds in which the sentence is true.


You should be wondering why all these possible worlds are necessary. Isn’t the actual world sufficient for all linguistic needs? Sentences with negation suggest more is required for sentence meaning than its correspondence to the real world. Consider the following sentences:


            Orville is not flying.

            Orville is not eating.


If both of these sentences happen to be true in the actual world we have no way to distinguish their meanings by referring to their extensions. They would have the same extension, namely true. A theory of sentence meaning based only on the truth conditions of these sentences in the real world predicts that they have the same meaning. But that is ridiculous! In order to evaluate the difference in their meanings, though, we have to envision alternative worlds in which one sentence might be true while the other sentence would be false. The sentences have the same extension, but different intensions. Thus, possible worlds are needed to distinguish between the truth conditions for sentences with negation.
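The same point can be made concrete by treating the intension of a sentence as the set of worlds in which it is true and its extension at a world as a truth value. A sketch with invented worlds, showing how the two negated sentences can share an extension while their intensions differ:

```python
# Sentence intensions as sets of possible worlds (invented for illustration).
not_flying = {"actual", "w1"}   # worlds where 'Orville is not flying' is true
not_eating = {"actual", "w2"}   # worlds where 'Orville is not eating' is true

def sentence_extension(intension, world="actual"):
    """The extension of a sentence at a world is its truth value there."""
    return world in intension

# Same extension in the actual world ...
print(sentence_extension(not_flying), sentence_extension(not_eating))  # True True
# ... but different intensions, so the meanings come apart.
print(not_flying == not_eating)   # False
```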


This argument can be extended to any sentence that is false:


            The moon is made of green cheese.

            The seas start to boil at the equator.


Since both of these sentences are false in the real world, they have the same truth value. Their extensions suggest they have the same meaning, and we know this is not the case. By appealing to their intensions we can provide an account that would give them different truth conditions and, thus, different meanings.


Such truth-conditional semantic theories can still be considered a form of denotational theory since they attempt to account for meaning in terms of a denotation, albeit a denotation in all possible worlds. This approach to meaning originated in work by Tarski in formal logic. The philosopher Richard Montague extended this approach to natural language. Although there are many examples in natural language where an appeal to possible worlds is needed to provide an adequate semantic description, this approach has also encountered much criticism for the elaborate notions encapsulated under the concept of possible worlds. One of these criticisms is the problem of logical omniscience. How would all speakers of a language construct the same set of truth conditions for all possible worlds? Despite such problems, many linguists and philosophers have adopted some version of a truth-conditional theory since it allows for an explicit approach to the construction of sentence meaning.



1.3.4 A Compositional Theory


Kearns provides an example of a simple semantic framework to give you an idea of what a compositional approach would look like. Her example uses semantic extensions rather than semantic intensions. What other shortcomings can you find in her theory?


The Lexicon

29
    Midge       SVal(Midge) = Midge
    Keeper      SVal(Keeper) = Keeper
    barks       SVal(barks) = the set of creatures that bark
    grins       SVal(grins) = the set of creatures that grin

The Syntax

30
    NP    —>   name
    VP    —>   IV
    S     —>   NP + VP
    name  —>   Midge, Keeper
    IV    —>   barks, grins


The semantic valuation (SVal) for the sentence (its interpretation) is given by checking to see if the named creature is a member of the set specified by the verb, i.e.:

 

32.       SVal(S) = true iff SVal(NP) ∈ SVal(VP)


The syntax produces a sentence such as:


            Keeper barks


We can interpret this sentence by applying rule 32 to it:

 

33.       SVal(Keeper barks) = true iff SVal(Keeper) ∈ SVal(barks)


This statement is just a formal way of stating the ordinary sentence:

 

34.       Keeper barks is true if and only if Keeper is a member of the set of things that bark.


[Diagram: Things that grin = {Midge, Buddy}; Things that bark = {Keeper, Happy}]
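A minimal sketch of this fragment in Python, using the sets pictured above. The representation (strings for individuals, Python sets for the verb denotations, a dictionary for SVal) is my own choice for the sketch, not Kearns’ notation:

```python
# A toy implementation of the fragment: SVal assigns individuals to names and
# sets to intransitive verbs; rule 32 interprets a sentence via set membership.
SVal = {
    "Midge": "Midge",
    "Keeper": "Keeper",
    "barks": {"Keeper", "Happy"},   # the set of creatures that bark (as pictured)
    "grins": {"Midge", "Buddy"},    # the set of creatures that grin (as pictured)
}

def interpret(sentence):
    """SVal(S) = true iff SVal(NP) is a member of SVal(VP), for sentences 'name IV'."""
    name, iv = sentence.split()
    return SVal[name] in SVal[iv]

print(interpret("Keeper barks"))   # True: Keeper is in the set of barking creatures
print(interpret("Midge barks"))    # False: Midge grins but does not bark
```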


Kearns shows what can happen when you add the word and to the lexicon.



Object Language and Metalanguage


You may have noticed that throughout the preceding discussion some words appear in italics. One problem that we often meet in a semantics course is the need to use language in order to talk about language. The typographic convention is to put a word in italics or quotes if you mention it, but to use a regular font if you use it. Consider the following sentences:


            January has 31 days.

            January has seven letters.


In the first sentence I used the word January to refer to a particular month. In the second sentence I refer to the word January rather than the month it names. Problems can occur when we fail to distinguish the metalanguage (the language we use to talk about language) from the object language (the language we are analyzing). You should be careful to observe this distinction when you do the exercises for this class.


One advantage of the semantic framework that Kearns introduces is that it provides an explicit metalanguage for the object language sentences Midge grins and Keeper barks. The semantic valuation provides a translation into the metalanguage. This is an improvement over a semantic analysis such as MIDGE GRINS, but it is only as good as our knowledge of the elements of the set of grinning creatures.



1.3.4 Truth-based relations between sentences


Just as lexical items have semantic relations such as synonymy and antonymy, sentences have various types of semantic relationships. The entailment relation is a relation between the truth of sentences, or more accurately, a relation between the truth of the propositions that sentences express. Sentence A entails sentence B if whenever A is true, B is true.


A sentence contradicts another sentence if both sentences cannot be true in any circumstances.
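These relations can be pictured with the possible-worlds machinery introduced above: A entails B if every world where A is true is a world where B is true, and A contradicts B if no world makes both true. A sketch with invented worlds (the Corvette example anticipates the entailment used later in the implicature section):

```python
# Propositions modeled as the sets of worlds in which they are true (invented here).
worlds = {"w1", "w2", "w3", "w4"}

drives_a_corvette = {"w1", "w2"}
drives_a_car      = {"w1", "w2", "w3"}
drives_no_car     = worlds - drives_a_car   # {"w4"}

def entails(A, B):
    """A entails B: every A-world is also a B-world."""
    return A <= B

def contradicts(A, B):
    """A and B cannot both be true in any world."""
    return A.isdisjoint(B)

print(entails(drives_a_corvette, drives_a_car))      # True
print(contradicts(drives_a_corvette, drives_no_car)) # True
```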



1.6 Presupposition


Presupposition is a special kind of entailment relation between sentences. The presuppositions of a sentence must be satisfied for the sentence to have a truth value. If a presupposition fails to hold, then the sentence does not have a truth value – there is a truth value gap. Strawson (1950) claimed that the sentence ‘The King of France is bald’ is neither true nor false because there is no person who is the King of France.


Since the presuppositions of a sentence must be true in order for a sentence to have a truth value, presupposition survives negation. More formally, if S presupposes P, then S entails P and not-S entails P. Some sentences, such as questions, do not have clear truth values, but do have clear presuppositions. Consider the sentence:


     When did Tom stop fiddling his taxes?


What is the negation of this question? Do wh-questions have truth values?
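A minimal sketch of the truth-value gap idea, assuming a three-valued scheme in which evaluation returns None when a presupposition fails; the propositions and facts below are invented for the sketch:

```python
# Presupposition modeled as a precondition on having a truth value at all.
# The facts are invented for illustration.
facts = {
    "France has a king": False,
    "the king of France is bald": False,
}

def evaluate(sentence, presupposition, negated=False):
    """Return True/False only if the presupposition holds; otherwise None (a gap)."""
    if not facts[presupposition]:
        return None                      # presupposition failure: truth-value gap
    value = facts[sentence]
    return (not value) if negated else value

# Both the sentence and its negation presuppose that France has a king,
# so both come out undefined rather than false.
print(evaluate("the king of France is bald", "France has a king"))                # None
print(evaluate("the king of France is bald", "France has a king", negated=True))  # None
```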



1.4 Implicature


Many times we infer a truth relationship between sentences which is not actually an entailment. Sentences can pragmatically relate to the truth of other sentences in which case we say that one sentence implicates another sentence. The philosopher Paul Grice first explored implicature. Grice begins with the idea that all linguistic exchange follows what he termed the cooperative principle.


            The Cooperative Principle

Make your conversational contribution such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.

            or

            Be helpful.


He argued that all participants in a conversation seek to be cooperative by contributing helpful information. Grice identified a number of conversational rules or maxims that embody the cooperative principle. Grice’s four maxims are now reduced to two:


            Maxim of Relation (also called the Maxim of Relevance)

            Be relevant.


            Maxim of Quantity (also called the Maxim of Informativeness)

            1. Make your contribution as informative as is required (for the current purposes of the exchange).

            2. Do not make your contribution more informative than is required.


Grice observed that it is possible to violate or flout conversational rules, and that speakers do this frequently to communicate ideas indirectly. Grice dubbed such indirect messages conversational implicatures and studied the way speakers implicate messages by flouting the maxims of conversation.


Assume you find the following entry in a ship’s log:


            The first mate wasn’t drunk last night.


At first glance, this entry seems to violate the maxim of quantity. It tells us something that we would ordinarily take for granted. Because it violates the maxim of quantity, though, we can assume the captain is implicating that the first mate was drunk on previous nights.



Implicature vs. Entailment


Conversational implicatures resemble semantic entailments in that both relate the truth of one proposition to the truth of another. The main difference between implicature and entailment is that you can cancel an implicature, but not an entailment.


For example, suppose our ship’s log read:


            The first mate wasn’t drunk last night, or any of the previous nights.


The added clause cancels the implicature that he was drunk the previous nights.

Compare this result to what happens when you try to cancel an entailment:


            ? Ian drives a Corvette, but he doesn’t drive a car.


We can state the rule for implicature more formally as:

            X implicates Y if

                        i. X does not entail Y

                        ii. the hearer has reason to believe Y is true based on the use of X and the Maxims of Conversation.


It seems rather paradoxical to propose rules for conversation that everyone violates. Grice claims

that his principles actually extend beyond conversation to other forms of human interaction.

Sticking to conversational exchanges, can you think of any examples that clearly violate Grice’s

Cooperative Principle?


Advertisers rely on implicature to make extravagant claims. How are Grice’s maxims exploited in the following claims?


            Campbell’s Soup has one third less salt.

            The Ford LTD is 700% quieter.

            Maytags are built to last longer and need fewer repairs.

            Mercedes-Benz are engineered like no other car in the world.

            Chevy trucks are like a rock.


Kearns notes that a presupposition is entailed by the literal meaning of a sentence and doesn’t depend on context. Implicatures, by contrast, depend upon context. What are the presuppositions of the examples above?



Scalar Implicature


Laurence Horn noted that words which denote quantities or degrees of attributes produce a scale of informative strength that is the source of what he termed scalar implicature. The prototype for such a scale is:


            6. (weak) < some, most, all > (strong)


The use of some makes a weaker assertion than most which is in turn weaker than all. Speakers obey the cooperative principle by using the strongest term that is consistent with what they know or believe to be the case. The use of a scalar expression implicates the negation of any term that is higher on the scale. Kearns discusses how scalar implicature works in stating how students did on a test.

 

         24 a.        Most of them passed.

                        implicature: Not all of them passed.

              b.        Some of them passed.

                        implicature: Not all of them passed.

                        implicature: Most of them did not pass.

              c.        Two or three did very well.

                        implicature: Not more than two or three did very well.
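A sketch of how the scalar reasoning can be spelled out: using a term on a Horn scale implicates the negation of every stronger term. The scale follows the prototype in 6; the function name and representation are my own for the sketch:

```python
# A Horn scale, ordered from weak to strong.
scale = ["some", "most", "all"]

def scalar_implicatures(term, scale=scale):
    """Using 'term' implicates 'not X' for every term X that is stronger on the scale."""
    stronger = scale[scale.index(term) + 1:]
    return [f"not {x}" for x in stronger]

print(scalar_implicatures("some"))   # ['not most', 'not all']
print(scalar_implicatures("most"))   # ['not all']
print(scalar_implicatures("all"))    # []  (the strongest term implicates nothing)
```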


Kearns provides the following example with an adjective:

 

            9          It’s quite warm out (Implicature: It isn’t hot)


This example suggests scalar implicature would be available for non-complementary adjectives, but not for complementary adjectives.


Kearns also notes that scalar implicature depends upon the context. The first member of each of the following pairs of sentences has a scalar implicature ‘not all’, which is absent in the second member:


            16a Some cast members want to see you after the show.

                b The photographer wants some cast members for the photo.


            17a Some of you are working well.

                b If some of you work solidly the mess could be cleared by tomorrow.



Lexical Meaning or What are we talking about?






‘When I use a word,’ Humpty Dumpty said, in a rather scornful tone, ‘it means just what I choose it to mean, neither more nor less.’


‘The question is,’ said Alice, ‘whether you can make words mean so many different things.’


‘The question is,’ said Humpty Dumpty, ‘which is to be master – that’s all.’

 

Meaning is at once the most obvious and most mysterious feature of human language. The philosopher Quine (1961:47) claimed that until the development of ‘a satisfactory explanation of the notion of meaning, linguists in semantic fields are in the situation of not knowing what they are talking about’. In this section we investigate whether Alice or Humpty Dumpty has a better theory of meaning. Kearns uses terms such as sense, concepts and intension to characterize meaning, but she still leaves us with the basic problem unsolved. I explore a few approaches to theories of meaning in this section.


Reference Theories of Meaning


Reference theories of meaning identify meaning with reference in either the real or an imaginary world. If we want to establish the meaning of a word, we can point to its referent. Reference theories of meaning work well for concrete objects such as horses and unicorns, but not so well for abstract objects such as beauty and truth. The philosopher Frege pointed out that the fundamental problem with reference theories of meaning is that they predict that any phrase with the same referent will have the same meaning. Frege’s example is that the phrases ‘the Morning Star’ and ‘the Evening Star’ have the same referent, but do not have the same meaning. Kearns’ Mr. Muscle Beach example illustrates the same point. The fact that words or sentences can have the same reference but differ in meaning forces us to conclude that there is more to meaning than reference.


Hilary Putnam (1988) argues that the intentional feature of meaning defeats “representational” theories of meaning. A simple representation would be a picture, but Putnam also has mental representations, semantic features, prototypes, and mental and computer states in mind as representational approaches to meaning. Putnam’s argument is that representational approaches assume that meaning can be reduced to a disjunction of completely determinate properties. He uses Wittgenstein’s discussion of the word game to argue that meaning has an “open texture” or flexibility that eludes representational approaches. He states “it is precisely the open texture of reference that defeats the classical philosophical pictures.”


Putnam points to three features of meaning that he thinks are responsible for this open texture:


1. Meaning holism. Holism treats meaning as a corporate body that resists analysis (Quine 1960). Truth-conditional approaches assume the truth of each sentence can be tested independently of other sentences. Holism insists that the truth of each sentence depends on other sentences in the language. Meaning holism also undermines the usefulness of definition; in Putnam’s words “revision can strike anywhere.” His example is the change to the definition of momentum that accompanied Einstein’s revision of Newtonian physics.


2. Normativity. Putnam argues that meaning is in part a normative notion. Testing a scientific theory cannot be done by simply examining the definitions of its terms. Scientific investigation involves a number of intangible operations: estimating simplicity and weighing new observations against previous beliefs. Decisions about meaning, such as whether terms like “momentum” or “electron” keep the “same meaning”, are just as complex. We require a degree of charity to interpret word meanings; meanings “are preserved under the usual procedures of belief fixation and justification.”


3. Linguistic Division of Labor. Putnam is best known for his arguments that the denotations of words depend upon their physical and social environment. This argument stems from Kripke’s (1980) original argument that the denotations of names ultimately trace back to some original naming event. In other words, the denotations of words such as water and gold depend upon the environment. Putnam argues that we ultimately depend upon experts to tell us the denotation of words like water, gold and elm. The connection to the environment ensures that meanings have an indexical component.



Concept Theories of Meaning


Concept theories of meaning provide a popular alternative to the reference theory of meaning. A concept theory of word meaning equates meaning with the mental concept named by a word. Concepts are themselves equated with the basic constituents of thought. Words provide procedures that enable us to construct concepts which can be combined to compose more complicated concepts just as words can be combined to form sentences. This approach equates language with vision in the sense that both of these input systems provide information to a common conceptual processor.


Both language and vision systematically underspecify the possible conceptual interpretations. Our minds have to supply additional information in order for us to interpret visual and auditory stimuli. Our minds, and the minds of other species, use common procedures to analyze input from all of our senses to construct concepts which serve as the content of the input information. The input stimuli constrain rather than fully determine the ultimate interpretation which is constructed from the immediate cognitive context. This account requires an internal language of thought, described by its most forceful proponent Jerry Fodor:


 

It’s entirely natural to run a computational story about the attitudes [beliefs, intentions and other kinds of thought] together with a translation story about language comprehension; and there’s no reason to doubt, so far at least, that the sort of translation that is required is an exhaustively syntactic operation... Syntax is about what’s in your head, but semantics is about how your head is connected to the world. Syntax is part of the story about the mental representations of sentences, but semantics isn’t. (1989:419)


The obvious objection to the concept theory of meaning is that we know as little about concepts as we know about meaning. Equating an unknown with an unknown doesn’t solve the question of what meaning is. Fodor’s syntactic vision of semantic content was originally implemented using a system of semantic markers which acted as stand-ins for the mental concepts. The philosopher David Lewis summed up his criticism of the language of thought hypothesis in the following paragraph:


 

Semantic Markers are symbols: items in the vocabulary of an artificial language we may call Semantic Markerese. Semantic interpretation by means of them amounts merely to a translation algorithm from the object language to the auxiliary language Markerese. But we can know the Markerese translation without knowing the first thing about the meaning of the English sentence: namely the conditions under which it would be true. Semantics with no treatment of truth conditions is not semantics. Translation into Markerese is at best a substitute for real semantics, relying either on our tacit competence (at some future date) as speakers of Markerese or on our ability to do real semantics at least for the one language Markerese. (1970: 18-19)


Concept theories of meaning fail to predict certain aspects of meaning such as semantic relations. In principle, we have access to every possible concept, but in practice it can be extremely difficult to learn new concepts (e.g. sense, number theory) or to invent new concepts (e.g. iPads, cellphones).


1.7 Semantic Features


One approach to the concept theory of meaning uses an analogy with chemistry. The idea is to define certain semantic elements or features that can be combined to account for the meaning of words. Many words can be grouped together into semantic fields defined by semantic features:


20
    SPECIES    MALE        FEMALE    JUVENILE
    human      man         woman     child
    horse      stallion    mare      foal
    swan       cob         pen       cygnet
    cat        tom         queen     kitten
    hare       buck        doe       leveret


The columns in this table pick out useful semantic dimensions such as SPECIES, MALE, FEMALE and JUVENILE. The meaning of the word stallion seems to be partly composed of the semantic features ADULT and MALE. We need an additional semantic feature to distinguish between the rows in this table in order to separate the meanings of man and stallion. Although some linguists have suggested terms such as HUMAN and EQUINE, such features are nothing more than synonyms for human and horse.
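A sketch of the feature idea, treating a word’s meaning as a set of features; the particular feature labels are as arbitrary as the next paragraph points out:

```python
# Word meanings modeled as bundles of semantic features (labels chosen arbitrarily).
features = {
    "man":      {"HUMAN", "ADULT", "MALE"},
    "woman":    {"HUMAN", "ADULT", "FEMALE"},
    "child":    {"HUMAN", "JUVENILE"},
    "stallion": {"EQUINE", "ADULT", "MALE"},
    "mare":     {"EQUINE", "ADULT", "FEMALE"},
    "foal":     {"EQUINE", "JUVENILE"},
}

# Without the species features, man and stallion would collapse into {ADULT, MALE}.
print(features["man"] - {"HUMAN"} == features["stallion"] - {"EQUINE"})   # True

# A semantic field: all the words sharing a given feature.
print(sorted(w for w, f in features.items() if "EQUINE" in f))   # ['foal', 'mare', 'stallion']
```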


Burling (1964) pointed out that the names we give semantic features are arbitrary. Neither MALE nor FEMALE has a privileged role in naming a feature. An arbitrary name such as WXYZ would work just as well. There is no reason to think that EQUINE is any better than horse as a name for the feature. Wierzbicka (1980) has proposed a universal set of semantic features.



Meaning as Brain States


In an era of brain scans, it seems natural to identify meaning with brain states. This theory identifies the meaning of spoon with the pattern of neuronal connections the brain activates when it processes the word spoon. Connectionism provides a computational implementation of a brain state theory. This theory predicts that patients with injuries in certain regions of the brain will experience difficulties processing word meaning. Unfortunately, such theories do not predict the specific words that patients with brain injuries will find difficult to process.


Theories that identify meaning with brain states have to bridge the gap between mind and body originally described by Descartes. Descartes proposed a mind/body dualism with a strict separation between the two spheres of existence.



Meaning and Use


We investigated several types of reference-based theories of meaning of the sort Alice would approve of and found that they all face the difficulties discussed by Frege. Frege offered a Humpty Dumpty theory of meaning as an alternative. A usage-based theory of meaning equates meaning with the ways that words are used. Dictionaries commonly employ a usage-based approach in their definitions of word meaning. Linguists and philosophers sneer at dictionary-type definitions, but they have yet to offer a viable alternative. Linguists have much to learn from exploring the practical approach that lexicographers use to construct definitions. We will explore the criticisms of dictionary definitions in the textbook before looking at how a usage-based theory meets the five tests we used for reference-based theories.


Problem 1: Dictionary Definitions Use Words in Definitions for Other Words


A common complaint about dictionary definitions is that you have to know the meanings of the words the dictionary uses before you can understand the meaning of the word you are looking up.


The philosopher Willard Van Orman Quine proposed a dictionary-type theory of meaning as a basis for his thesis of semantic holism. Semantic holism assumes that the meaning of every word depends on the meanings of other words, tying word meanings into a semantic net. The more we learn about the whole, the more we know about the meaning of each word. A change in our understanding of a word will affect our understanding of the whole. Quine’s semantic theory reflects his view of science, where a single discovery can radically transform our understanding of everything.


A usage-based theory provides a dynamic theory of meaning. It recognizes that we do not know everything (that we lack what Putnam calls logical omniscience). We need to discover that the Morning Star and the Evening Star refer to the same planet. A usage-based theory explains why we can use words such as elm or beech the way other people use them without being able to identify their exact referents. A usage-based theory ties meaning to a linguistic community rather than to the minds of individual speakers.


Problem 2: Dictionary Definitions Include Function Words


Dictionary definitions cannot avoid using syncategorematic words such as the, of or to in their definitions. This practice seems to create a problem of accounting for our understanding of these words. Function words actually provide strong evidence in support of usage-based theories since these words lack obvious referents and differ considerably in use from language to language. Some languages lack articles or prepositions altogether. Quine observed that the meaning of articles like the depends on whether languages also have plural markers or noun classifiers.



References


Burling, Robbins. 1964. Cognition and componential analysis: God's truth or hocus-pocus?

American Anthropologist 66.20-28.

Dopkins, Stephen & Gleason, Theresa. 1997. Comparing exemplar and prototype models of categorization. Canadian Journal of Experimental Psychology 51(3).212-230.

Fodor, Jerry A. 1975. The Language of Thought. Cambridge, MA: Harvard University Press.

Horn, Laurence R. 2001. A Natural History of Negation. Stanford: CSLI Publications.

Kamp, Hans & Partee, Barbara. 1995. Prototype theory and compositionality. Cognition 57(2).129-191.

Kripke, Saul A. 1980. Naming and Necessity. Cambridge, MA: Harvard University Press.

Lewis, David. 1970. General semantics. Synthese 22.18-67.

Putnam, Hilary. 1988. Representation and Reality. Cambridge, MA: MIT Press.

Quine, Willard Van Orman. 1960. Word & Object. Cambridge, MA: MIT Press.

Wierzbicka, Anna. 1980. Lingua Mentalis: The Semantics of Natural Language. Sydney: Academic Press.