Blog posts
These blog posts first appeared on the Cam Lang Sci blog, run by PhD linguistics students at the University of Cambridge, 2014–2017.
Bridging the unbridgeable?
Last week I dropped into an exciting event happening here in Cambridge, the ‘English Usage (Guides) Symposium’, organised by some folk over in Holland who are boldly Bridging the Unbridgeable. I must admit: it had nothing to do with my PhD. However, being a linguist, and therefore avowed descriptivist, but also a copyeditor’s daughter with the bad habit of tutting at every cheeky hyphen with aspirations of being an en-rule, I couldn’t resist.
They had brought together authors, linguists, linguists-cum-authors, usage guide writers, usage guide revisers, journalists, a syntactician, and even Grammar Girl herself for two days of exchange and moderately warm debate.
The most interesting questions that were floated throughout the symposium were those getting behind the issues. Why are there usage problems? Where do they come from? And, what do ‘we’ do about them?
But, first off, what are these usage ‘problems’ that shelves of usage guides have been written to sort out? Any feature of a language – construction, word, phrase – which is thought by some speakers to not adhere to convention, or, worse, to be downright incorrect, ‘not the proper way of saying it’. There are split opinions over split infinitives. People are, like, unsure over the use of ‘like’ as a discourse marker. Dangling prepositions are something which people get het up about. Between you and I (or should it be me?), ‘literally’ as an intensifier is literally making steam come out of some folks’ ears. And so on. You get the idea. If you want to find some more examples, try Fowler’s Modern English Usage, Sir Ernest Gowers’ Plain Words, or perhaps the letters page of your chosen newspaper.
Where do these usage problems come from? One thing common to most comments about some problematic feature is the perception that it is a new(fangled) development. But this is almost always not the case. As David Crystal pointed out in his delightful talk on metalinguistic mentions in Punch magazine 1841–1901, the first mention of the dreaded split infinitive was in 1898 – and this is perhaps surprisingly late. Quotative ‘like’ (‘and she was like, “what’s bugging you?”’) is thought to have spread from California into British English in the 1980s.
What such comments do rightly hit upon, though, is that usage problems often arise with language change, perhaps as new words and grammatical constructions arise in the spoken language, but different, older conventions are adhered to in written communication. Together with this goes sociolinguistic variation – the coexistence of forms with similar meanings or functions in different sociolects, which leads to competition between them and some sort of value judgement. And, as was pointed out in the symposium by Pam Peters and Geoffrey Pullum, usage guides themselves, or rather their authors, sometimes practically invent usage problems, because of very personal opinion or the sheer need to comment on linguistic features and create an elitist ‘eloquent English’. So there’s a self-fulfilling aspect to this, too.
But once a linguistic feature becomes a ‘usage problem’, why does it gain a market share in the linguistic interest not just of ‘pedants’, but of the public in general? Why is it that everyone, it seems, even if not entirely sure about when to use ‘who’ and when ‘whom’, has some sense that there is something they should know about this and expresses attitudes about different usages (and the users)? That was what one of the ‘Bridging the Unbridgeable’ researchers, Viktorija Kostadinova, was asking. Two main views emerged during the symposium. For some, like Grammar Girl Mignon Fogarty, what was clear was that speakers like to have quick, simple answers, black-and-white, right-and-wrong, for functional reassurance (‘if I say that in a job interview, will I be disadvantaged?’). I suspect there may be some appeal in feeling superior as well (inwardly tutting at those who apparently don’t know better). For Geoffrey Pullum, on the other hand, our obsession with correct usage comes instead from a grammatical masochism – we want to punish ourselves by finding out all the rules of our language, and how we’re doing things wrong. Why, he asked, are we happy to consult usage authorities older than our grandparents when we wouldn’t dream of consulting a medical or physics textbook from the 1920s? It must be a pleasure in our mistakes. I’m not so sure about that. One idea that did strike me, though, from Robin Straaijer, was the observation that, as we spend a lot of time investing in learning our (written) language – 13 school years here – we are then loath to discover that that’s not how you say it any more.
So what do ‘we’ do about them? By ‘we’ here, I mean mostly linguists, who are often called upon, whether by friends or a newspaper journalist, to comment on controversial linguistic features. Three options were presented last week. Firstly, as the Bridging the Unbridgeable team are setting out to do, we can descriptively investigate the sociolinguistics of this phenomenon – what are users’ attitudes to which usage problems? Secondly, we can help nudge usage guides from the Arts (personal opinions about best usage) to the Social Sciences (based on actual examples of usage), by providing historical linguistic information about the emergence, or decline, of conventions (put forward by Pam Peters). Thirdly, we could try to save speakers from their apparent ‘grammatical masochism’, by pointing to linguistic features for which, even in a ‘standard’ variety, there appears to be no clear majority view on convention. Whichever route is taken, it seems that dialogue in this area is important, as it is through usage guides and usage problems that many people begin to be curious about language.
Where have all the implicatures gone?
In any Pragmatics 101, you’ll learn that Paul Grice, one of the fathers of the field as we know it today, originally proposed four maxims fleshing out his Co-operative Principle for communication: quality, quantity, relevance, and manner. Relying on the assumption that these maxims hold of their interlocutor, hearers make inferences from the speaker’s utterance: pragmatic enrichments of the literal semantic content – what the speaker meant, though didn’t literally say. These aspects of the meaning are called implicatures.
—
Grice’s Maxims

Quality: Try to make your utterance one that is true:
1. Don’t say anything you believe to be false.
2. Don’t say anything for which you lack evidence.

Quantity:
1. Make your utterance as informative as is required (for the purposes of the discourse).
2. Don’t make your utterance more informative than is required.

Relation: Be relevant.

Manner: Be perspicuous:
1. Avoid obscurity of expression.
2. Avoid ambiguity.
3. Avoid unnecessary prolixity.
4. Be orderly.
—
Now, subsequent theorists – neo-Griceans and post-Griceans – have, rightly, pointed out that Grice’s four maxims are not the be-all and end-all – they include interrelations and redundancy, and Grice himself suggested that there may be others besides. With the exception of Relevance Theory, though, later theories have maintained the plurality of maxims, for example Horn’s Q and R principles or Levinson’s Q, M and I (Horn 1984; Levinson 2000). They’re all assumed to be able to cover at least a basic diversity of cases such as these (where +> indicates the implicated meaning and an informal reasoning is given in brackets):
Mavis: Would you like a camomile tea?
Mary: I need to work late tonight.
+> Mary does not want a camomile tea
(given the world knowledge that camomile tea is soporific, it relates to the question as a negative answer as it would not aid working late)
Bob: Did you cycle to Brighton?
Ben: I cycled to London.
+> Ben did not cycle to Brighton
(given the knowledge that Brighton is further from Cambridge than London, had Ben cycled further, he would have said so)1
John made the car stop.
+> John made the car stop not in the normal way
(otherwise he would have used the conventional phrase ‘stopped the car’)
Terry: Did you eat the cookies?
Tom: I ate some (of the) cookies.2
+> Tom did not eat all the cookies.
(Given that Tom knows how many cookies he ate, if he had eaten all of them, he would have been informative and said so)
These are examples of relevance, quantity ad hoc, manner and quantity scalar implicatures, respectively. However, if you trawl through a database of academic articles for studies on the subject over, say, the last fifteen years, you will find almost exclusively the final case. Scalar implicatures rule the pragmatic roost at present. But, as was said at the recent Formal and Experimental Pragmatics Workshop at ESSLLI (of which proceedings here), we need to remember that ‘some’ is not the only word.
Studies have been restricted to this one type of implicature: scalar implicature, a sub-type of quantity implicature. They are, in many ways, a paradigmatic case, and the basic intuitions about them, according to standard theory, are pretty clear. Furthermore, there arose some intricate debates about particular cases (if you’re interested, the default vs nonce and globalist vs localist battles) which kept the theoretical market buoyant with new theories and counter-examples, and the experimentalists in a job testing all these theories.
However, while this means that we might be making some progress on understanding something about Scalar Implicatures, and perhaps Quantity Implicatures in general, what we know about Manner and Relevance is lagging behind. And this is unsatisfactory, because, on a Gricean view, we want a unified approach to these different inferences. We also don’t know much about how the different inferences interact. What happens when multiple inferences could be derived from a single utterance? How does one support (or interfere with) the other? For example, relevance-type inferences may well be crucial in generating, or constraining, the alternative utterances that are negated as part of Quantity Implicature derivation (e.g., all is the stronger alternative whose negation enriches some from ‘some and possibly all’ to ‘some but not all’).
But further, as Bart Geurts pointed out in his talk on Co-operativity at the Workshop, work on implicature has also restricted itself to only one type of speech act – assertion – while it is clear that other speech acts may also yield implicatures:
1 Where did you last see your poodle?
+> That may help you to find it.
2 Shoot the piano player!
+> The drummer can stay.
3 Do you have a pen or pencil?
+> Either will do.
(Taken from Bart Geurts’ talk – slides available here)
This imbalance takes on another hue from the perspective of my research in acquisition. Work on language acquisition is always a bit chicken-and-egg: we want to look at how children acquire a certain feature of language, and to do so we need to know what that feature of language is. This makes it rather problematic when it comes to developmental pragmatics: how can we investigate how children learn to derive implicatures when we’re not sure how adults process them? On the other hand (the egg-first perspective), looking at how children acquire a linguistic feature can tell you a lot about its nature. And that’s where work on the big picture of children’s pragmatic competence (or lack of it) is exciting for theorists too3.
This year saw a milestone in the field of developmental pragmatics: an edited volume with the does-what-it-says-on-the-tin title Pragmatic Development in First Language Acquisition (Ed. Danielle Matthews). There are chapters on the state of the art in speech acts, metaphor, irony, evidentiality, prosody, conversation, word learning, and – you guessed it – scalar implicature. But were Manner and Relevance anywhere to be seen?
If we want a really Gricean view – in which speakers are always pragmatic as part of a more general rationality and co-operativity – we need to broaden our attention to include more types of inference – in processing as well as acquisitional studies. Here endeth the plea.
1 And a real-life one of the same type from just the other day (in case you were thinking that this is just something linguists like making up):
Vet student friend: I treated a hedgehog and a squirrel this week.
Teacher friend: Are they still alive?
Vet: The squirrel is.
+> The hedgehog is not alive.
(Given that the vet student is in a position to know about both animals, and, if both were alive, he would have been informative and said so)
2 There are interesting lexical factors at work in some implicatures. Studies have consistently shown that hearers derive a scalar implicature more often if the utterance contains some of the than if it contains plain some (and equivalents in other languages; e.g., Degen & Tanenhaus, 2011; Pouscoulous et al., 2007). However, my own intuition is that the ‘some and possibly all’ reading of some of the may be very hard to get; for instance, to me (a) seems fine while (b) is somewhat odd:
a) I ate some cookies. In fact I ate all of them.
b) I ate some of the cookies. In fact I ate all of them.
What do you think?
3 This is what my PhD research is (partly) about – so watch this space for more on this topic in future posts!
References
Degen, J., & Tanenhaus, M. K. (2011). Making inferences: the case of scalar implicature processing. In Proceedings of the 33rd Annual Conference of the Cognitive Science Society (pp. 3299–3304). Austin, TX: Cognitive Science Society.
Grice, H. P. (1989). Studies in the Way of Words. Harvard University Press.
Horn, L. (1984). Toward a new taxonomy for pragmatic inference: Q-based and R-based implicature. In D. Schiffrin (Ed.), Meaning, Form, and Use in Context: Linguistic Applications (pp. 11–42). Washington, DC: Georgetown University Press.
Levinson, S. C. (2000). Presumptive meanings: The theory of generalized conversational implicature. Cambridge, MA: MIT Press.
Matthews, D. (Ed.). (2014). Pragmatic Development in First Language Acquisition (Vol. 10). Amsterdam: John Benjamins.
Pouscoulous, N., Noveck, I. A., Politzer, G., & Bastide, A. (2007). A developmental investigation of processing costs in implicature production. Language Acquisition, 14(4), 347–375.
Whistle while you talk
One of the linguistic stories that’s been doing the news round this week has been about the kuş dili language of Kuşköy in Northern Turkey. It’s one of several documented whistled languages, typically used by peoples living in mountainous or forested terrain to communicate over distances. Such languages aren’t the community’s only language, but are based on their spoken language, like Turkish. Now, the story this week didn’t concern the discovery of this language – actually researchers have known about it for decades – but rather some new findings about how the brains of speakers (or whistlers) of kuş dili (or ‘bird language’) function when they speak (or whistle).
Hearing about linguistic systems so radically different from our own often leads us to react with amazement and some intrigue. Perhaps like when we hear that there’s a language with only 8 consonants or no tenses. How is it possible?
The interesting thing is that we are really asking: how is it a possible language? We have no doubt that it is a language. How do we know this?
Well, there are two ways of viewing language. One is the code-based model, which is based on association – between a signal and something in the world, and between a signal and a response. The word ‘cat’, for instance, is associated with something in the world, namely all cats. Whistled languages are clearly possible from this perspective, because the whistles encode some meaning; they are associated with a meaning, which is something out there in the world, albeit out there in people’s minds. (And at this point let’s not get into the debate as to how complex this system of associations has to be to qualify as ‘language’.)
The other view is the ostensive-inferential communication model. Sperber & Wilson, from a cognitive perspective, and Scott-Phillips, from an evolutionary perspective, would argue that this is the key to human communication – and human language. Indeed, it is what makes human language what it is, and so very different from any animal communication system. The idea is this: what enables us to use the conventions of language (the phonemes, morphemes, and syntax of English, or whatever other variety we happen to have learnt), is our amazingly prosocial nature. We are able to use signals intentionally to communicate a message. And not just with the intention that the hearer (or viewer, if we are making a gesture) understands our signal, but rather that they recognise that we intend to make a signal and that we intend for them to recognise our intention (that’s the ostensive-inferential bit).
This means, to take Scott-Phillips’ example, that tilting our coffee cup towards a waitress in a café whilst catching her eye is understood as a request for a top-up; tilting our cup as a result of an animated discussion with our companion, on the other hand, is not taken by the waitress as any sort of communication on our part. When we use language, it’s just the same. Language lets us get our meaning across much more precisely and extensively, but it is always underdetermined (unlike mathematical languages, for instance) – we can never articulate every single aspect of meaning that we intend to communicate in that context, and frequently we don’t even try, knowing that our interlocutor can ‘fill in the rest’.
So, back to our whistled languages. We recognise them as languages because they do use a complex inventory of sounds and structure to encode a message (the code aspect of language), and because they are used as language. The whistled messages are intentionally directed to a hearer for them to recognise the whistler’s intention and their intended meaning. (And, incidentally, if you wanted to say that they aren’t languages proper, this still makes them much more like language than other animal communication systems, just as with our gestures.)
A not uninteresting blog post
Living in a picturesque and historic city has many perks. One of them is being frequented by film crews, who want Cambridge as a period backdrop – with a bit of blacking out the double-yellows and covering over the odd signpost, you can easily get the 1880s, the 1930s or the 1960s, as takes your fancy. This month’s cinematic excitement in town has been the filming of ITV’s Grantchester, which is actually set in the nearby eponymous village. Of course, the local free paper could not resist a wittily-entitled piece. And it caught my eye for another, linguistic, reason.
In the article, a (real) local vicar talks about the drama’s Rev Sidney, and comments: “… although they don’t show them in full, on the show they sometimes round it off with a bit of a sermon. They are certainly well-written, and he is not unprofound.”
Now, perhaps you think there’s nothing particularly remarkable about that. But what I found interesting was the last phrase, containing that double negation ‘not unprofound’. Why say that, rather than simply ‘profound’?
I think there are two steps to understanding the speaker’s choice here: a semantic one and a pragmatic one. First up, the semantics of negation. At least in English, adding un- to the front of an adjective (like ‘profound’) can have two effects. Have a look at these examples:
| | |
| --- | --- |
| happy – unhappy | worthy – unworthy |
| wise – unwise | impoverished – unimpoverished |
| intelligent – unintelligent | aware – unaware |
| friendly – unfriendly | reliable – unreliable |
| fair – unfair | blemished – unblemished |
| kind – unkind | |
Now, individuals’ intuitions about this do vary, but generally, the column on the left contains pairs which are contraries: the negated adjective, with un-, means the extreme opposite of the positive. Formally, a pair of adjectives are contraries if they can be simultaneously false, but not simultaneously true. For example, you can say ‘Bob isn’t happy, but he’s not unhappy either’, but you can’t say ‘Bob is happy, and he’s unhappy, too’. To put it another way, there is a middle ground between contraries, where you’re neither friendly nor unfriendly, neither kind nor unkind, and so on.
[Scale diagram: ‘Unfriendly’ at one end, ‘Friendly’ at the other, with a dotted stretch of middle ground in between]
Pairs of adjectives in the right column, on the other hand, are contradictories: the negated adjective, with un-, is just the opposite of the positive, the absence of the positive. These pairs cannot be true at the same time or false at the same time. You can’t say ‘Bob is neither aware nor unaware of the situation’, and nor can you say ‘Bob is aware and unaware’. In other words, there is no middle between these terms: you have to be one thing or the other.
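In compact truth-conditional terms (the notation here is my own gloss, not anything from the article): contraries exclude each other but leave a middle; contradictories exhaust the space between them.

$$
\begin{aligned}
\textbf{contraries:}\quad & \neg\,(\mathrm{happy}(x) \land \mathrm{unhappy}(x))\ \text{necessarily, but possibly}\ \neg\,\mathrm{happy}(x) \land \neg\,\mathrm{unhappy}(x)\\
\textbf{contradictories:}\quad & \mathrm{unaware}(x) \leftrightarrow \neg\,\mathrm{aware}(x)\quad \text{(no middle ground)}
\end{aligned}
$$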
Now these distinctions were observed as far back as Aristotle (cf. the square of opposition), and they seem to have to do with the nature of the positive adjective – whether it’s something that can have degrees, whether it’s gradable. You can be more or less friendly, but you can’t be more or less aware. However, the important thing for us is that there is this distinction between two types of negative adjectives.
So which column does ‘profound-unprofound’ go in? I have to say, my intuition is not clear. And the OED says that it means ‘not profound; shallow, superficial’, which suggests that it could go in either. ‘Not profound’ is the contradictory, whereas ‘shallow’ is the contrary. How can we tell what our vicar meant here?
The question then is: what happens when you stick another negative, ‘not’, on the front: ‘not unhappy’ vs ‘not unaware’? Well, with the contraries, like ‘not unhappy’, you get a meaning that could be ‘happy’ or ‘neither happy nor sad’: it encompasses the extreme opposite and the middle ground. And because we don’t have a single word with that meaning in English, we can see a good motivation for using a phrase like ‘not unhappy’. But with contradictories, ‘not unaware’ logically means ‘aware’. So why on earth would we say it?
Well, this is where the pragmatics comes in. If the speaker has gone to extra lengths to say ‘not unaware’, when they could have saved themselves a couple of syllables with the simple ‘aware’, this must be because they want to communicate something extra. Something that using ‘aware’ on its own would not convey. Take these examples:
What’s the service like at your local garage?
It’s not unreliable.
+> It’s not as reliable as ‘reliable’ would suggest.
I was not unaware of the situation.
+> I was acutely aware of the situation.
The exact inference depends on the context, but it seems you might get the kind of inferences indicated by +>. Either there’s a tempering of the meaning of the positive term, or there’s an intensification of it.
So back to our newspaper clipping, ‘he is not unprofound’. If ‘unprofound’ for the local rev means ‘shallow’, then he is simply saying that it is not the case that his TV counterpart’s sermons are shallow. But if his mental lexicon has ‘not profound’ for ‘unprofound’, then choosing the phrase ‘not unprofound’ is implicating something more. And in the context, that the sermons “are certainly well-written”, we can infer that he means that they really are profound.
But to find out whether you agree, you’ll have to wait til next year, when the next series of Grantchester is aired.
Game Theory
’Tis the season to be merrily playing board games. At least in our family. Recently we were given a rather good new one by some friends, called Hanabi, and all our guests and family members have been subjected to it. It’s definitely a game for a pragmatician like me.
My interest was first piqued when I was told, upon presentation, that it is a co-operative game. Now, regular readers of this blog might remember previous posts on Pragmatics, introducing a chap called Paul Grice, a British philosopher and linguist whose thinking is foundational for much present-day pragmatics. His big thing was The Co-operative Principle: “Make your contribution [to the conversation or exchange], such as is required, at the stage at which it occurs, by the accepted purpose or direction of the talk exchange in which you are engaged.” Now, you might think that such a statement is itself a bit obtuse and not very, well, co-operative. But it’s basically saying: say the right thing at the right time in the right way.
We do this all the time when we communicate. When I woke up the other morning and exclaimed to my husband ‘the bin’, he had no trouble inferring that I meant something like, “help! it’s a Thursday – we must get the black wheelie bin out at once to avoid having smelly rubbish on our hands for the next fortnight!” But in another context, that might not have worked, or it might have meant something entirely different. Most of the time we know how much information we need to convey to our conversation partner.
But what happens when there are some extra constraints on our communication? That’s where the fun of Hanabi starts. All the players have to work together against the game, to construct a wonderful fireworks display for the Japanese Emperor (though the back story is not crucial at all). Everyone holds their cards facing outwards, so I can see everyone else’s cards, containing elements of the fireworks display, but not mine. Players have to communicate pieces of information to other players, so that they know which of their own cards to play or discard. ‘Easy!’, you might be thinking. But wait a second: in any one go you can only communicate one piece of information about the number or the colour of the cards in one other person’s hand. So for example, if I’m looking at your cards, I might say “you have two fours”, or, alternatively, “you have one red”, and point those cards out.
So, as the speaker, I have to decide not only what the most useful thing for another player to know is, but also how I can communicate that to them. I have to take into account what we both know about the game so far, what the most salient aspects of the game currently are, what information other people have already communicated and how. In other words, I have to ‘do pragmatics’ – what we all do all the time when we’re chatting, or writing a blog post. But the difference here, playing Hanabi, is that, like the cards, it’s on the table. The reasoning I’m doing about what I want my chosen player to infer, and what inferences they will actually make, is conscious (and sometimes somewhat tortuous). Usually making a co-operative contribution to some conversation requires complex but pretty subconscious reasoning; in Hanabi, the interesting twist is that both speaker and hearer are paying conscious attention to it. And I wonder what difference that makes?
From my own anecdotal experience, it can be extremely difficult, as the hearer (the player being given information), to work out what someone else intends me to infer, precisely because I’m giving equal consideration to the numerous options: ‘now, has he told me that’s a red, because he wants me to play it now, or play it later, or it’s no longer useful and can be discarded, or…?’ That’s the kind of quandary we usually only find ourselves in in situations of miscommunication, when it’s really unclear what someone meant. (Another domestic example: Me – have you washed the pots? Husband – yes. Me – but they’re still muddy. Him – oh, I thought you meant pans, not potatoes!)
Just in case you’re wondering whether this is a nice ol’ ramble, but a bit far removed from any serious linguistic content: a couple of pragmaticians did actually conduct a study not a million miles removed from Hanabi, in which participants had to communicate which object an interlocutor should pick up using only colour or shape, to find out whether speakers can refer to objects optimally, and hearers can interpret as the speaker intended. (Yes, but it’s complicated…) My experience of Hanabi, though, makes me wonder how much people’s communicative behaviour changes when they’re placed in such a peculiar game situation; how much does it tell us about everyday linguistic reasoning?
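For a flavour of what such Bayesian models look like, here is a minimal sketch in TypeScript of Rational-Speech-Acts-style referential reasoning, the general family of models the study compares. The toy world, the rationality parameter and every name in it are my own illustrative assumptions, not the authors’ actual model:

```typescript
// Toy referential game: three objects, utterances name a colour or a shape.
const objects = ["blue circle", "blue square", "green square"] as const;
const words = ["blue", "green", "circle", "square"] as const;

type Obj = (typeof objects)[number];
type Word = (typeof words)[number];

// Literal semantics: is the word true of the object?
const isTrueOf = (w: Word, o: Obj): number => (o.includes(w) ? 1 : 0);

const normalise = (xs: number[]): number[] => {
  const total = xs.reduce((a, b) => a + b, 0);
  return xs.map((x) => (total > 0 ? x / total : 0));
};

// Literal listener L0: P(object | word), uniform prior over referents.
const L0 = (w: Word): number[] => normalise(objects.map((o) => isTrueOf(w, o)));

// Pragmatic speaker S1: prefers words that make L0 pick the intended object.
// alpha is a 'rationality' parameter; the value is an arbitrary choice.
const S1 = (o: Obj, alpha = 4): number[] => {
  const oi = objects.indexOf(o);
  return normalise(words.map((w) => Math.pow(L0(w)[oi], alpha)));
};

// Pragmatic listener L1: Bayesian inversion of the speaker (uniform prior).
const L1 = (w: Word): number[] => {
  const wi = words.indexOf(w);
  return normalise(objects.map((o) => S1(o)[wi]));
};

// Hearing "blue", L1 favours the blue square: a speaker who meant the blue
// circle would probably have said "circle", which is unambiguous.
console.log(L1("blue")); // ≈ [0.11, 0.89, 0] over [blue circle, blue square, green square]
```

The point is in the last line: a literal listener is 50/50 between the two blue objects, but a pragmatic listener reasons about what the speaker would have said had she meant the other one – exactly the kind of reasoning that Hanabi drags out into the open.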
But as for me, it’s back to some more playing at pragmatics.
Qing, C., & Franke, M. (2015). Variations on a Bayesian theme: Comparing Bayesian models of referential reasoning. In H.-C. Schmitz & H. Zeevat (Eds.), Bayesian Natural Language Semantics and Pragmatics. Heidelberg: Springer.
Bon mots? Au contraire!
Jeremy Paxman’s opinion piece in the FT last week, in which he called the French language ‘useless’, has, unsurprisingly, caused something of a furore.
The articles about his article picked up on such quotable lines as:
“It is time to realise that in many parts of the world, being expected to learn French is positively bad for you.”
“The outcome of the struggle is clear: English is the language of science, technology, travel, entertainment and sport. To be a citizen of the world it is the one language that you must have.”
Now, when you come to look at the whole piece (unfortunately accessible only via FT subscription), most of it focusses on language policy in la Francophonie, the group of countries in which French is spoken, a hangover of its colonial past. Now that’s mostly too political, historical and economic for me, a linguist in training, to get my teeth into. But I would like to comment on a couple of parts of Paxman’s argument, to give the view from linguistics.
Firstly, one of the reasons he gives for stopping the teaching of French in Francophone countries is the dominance – ‘victory’, even – of English. That’s the only useful language to know, being “the language of science, technology, travel, entertainment and sport.” But wait, not so fast, Mr Paxman. Granted, English has more first language and second language speakers than French (300–400 million first language and up to 1bn second language, compared to 80 million and 220 million), and granted, it is widely used, especially in academia. But I wonder whether it would come as a surprise to know that less than half of internet content is in English, or that around 6bn people, over 80% of the world’s population, do not speak English? If cross-cultural communication is something we care about, then English is not the only language worth knowing.
Secondly, the application for us Brits is: don’t bother with French. Well, okay, Paxman does make it a bit more nuanced than that: “If you are a native English speaker, by all means learn Chinese or Arabic or Spanish. If you must, study French, because it is a beautiful language. But let us have no truck with suggestions that it is much worth learning as a medium of communication.”
Thankfully, he’s not advocating not learning any foreign languages (although you might think the paean to the usefulness of English strongly implies it), and, personally, I might well agree that, given a choice, it would be good to have more people learning Mandarin, Urdu, Farsi or Russian (to take a few currently required by GCHQ), rather than French. But, given the intertwined sociolinguistic history we share with our neighbours, learning French can be a fascinating way into language learning. Learning one foreign language equips you with metalinguistic knowledge and cognitive strategies that help you learn the next, so while French remains, sadly, the only option for some at school in the UK, it should not be discouraged. Not to mention the many uses it does still have in business and diplomacy (to take one example, the UK is one of the top 6 foreign investors in Morocco, where French is the business language).
The really unfortunate thing about Paxman’s opinion piece – and of course, he is entitled to an opinion – is that it’s full of pithy pull-outable lines that have the potential to cause much more damage out of context, the worst offender being: “the real problem with French is that it is a useless language.” When calling any language useless, one has to ask ‘for what?’ and ‘for whom?’ It may be that some languages are politically or culturally more strategic to learn for different people at different times, but no language, while still alive, is ever useless – for its speakers, however few or many, it is their means of communication, and therefore incredibly useful.
Jeremy Paxman, ‘Voilà – a winner in the battle of global tongues’ (Comment), Financial Times, 8 April 2016.
Go pop!
I’ve got a confession to make. I love pop science books. Sometimes, as an academicky type, I wonder whether I should, as there are so many interesting hard-core research articles and weighty tomes to read, and so little time. But actually, there are some good reasons to read pop linguistics books, even if you are a linguist. I’m talking here mostly about books written by academics for a general audience, rather than those sometimes poorly researched compendiums that adorn bookshop tables in the run-up to Christmas. Here are my top 3 reasons and top book recommendations for this week.
- Broaden your horizons. Poking your head out from your own research nook and finding what’s going on in other crannies is not only interesting but can spark new ideas and make you aware of connections you hadn’t previously seen. And what better way of doing that than with a book you can relax with on the beach, in the bath or under the bedsheets.
- Get the bigger picture. At least in my neck of the linguistic woods, developments tend to be announced in journal articles and conference papers rather than books, and therefore are often very tightly focussed. It’s so easy to get bogged down in experimental methodology, stimuli or statistics. A book helps you to see the wood for the trees, especially one that’s written for non-specialists and so has a strong narrative thread or polemic stance.
- Be inspired. Some academics I’ve met seem to find the idea of public engagement ridiculous or unnecessary. I wonder whether it’s because there’s an implicit threat that ‘your research isn’t worth anything, unless you package it up so it’s useful to some other community, right now!’ But reading great pop science books makes it so clear that that’s not the case at all. Of course, research work has its own value, and this goes hand-in-hand with communicating it far and wide.
On my bookshelf:
- The Stuff of Thought, Steven Pinker
Every linguistics undergrad comes up to university equipped with some Steven Pinker. Usually it’s The Language Instinct, which transforms every young reader from an intuitive Sapir-Whorfian into a fully-formed nativist. But, being a pragmatician myself, I would commend this later work, which unpacks some key themes in semantics and pragmatics, while being full of delightful and witty examples.
- Neurotribes, Steve Silberman
This isn’t strictly about language per se, but it still makes the cut. It’s a history of thinking and research on autistic spectrum disorder – which of course can profoundly affect communication and language – and for those of a younger generation for whom the condition is now common parlance, this book is truly eye-opening. I would never have thought that a whole chapter on the development of a diagnostic tool could be a page-turner, but this book kept me up late on several occasions.
- Pieces of Light, Charles Fernyhough
Another not-quite-language offering: this book is about memory. And of course, without memory we have no language. Written by a psychologist-cum-author, it’s a beautiful introduction to the wonderful, mysterious world of our memory, and the tragedy of losing it. Psycholinguistics classes are often filled with mysterious initials, like ‘JBR’ and ‘MD’ – famous case studies of aphasia, where the person has lost some aspect of language (which can be as broad as productive syntax, or as narrow as words for particular categories like vegetables), often as the result of a stroke or other brain damage. But the focus on the – admittedly fascinating – insights this gives us into language itself can make us forget the tragic consequences for these people’s lives. This book is a moving reminder.
- Speaking Our Minds, Thomas Scott-Phillips
This is a beautifully – and forcefully – argued treatise on how human communication is so special. Scott-Phillips explains how, when we talk with one another, we’re not just encoding and decoding, like a machine, but engaging in a complex process of inference about each other’s intentions. These kinds of observations have been very influential in one of the main areas of linguistic study – pragmatics – over the last half century, and this book also adds an evolutionary perspective (though disclaimer: I haven’t read those chapters yet).
- Wordsmiths & Warriors, David and Hilary Crystal
There had to be something from our British resident celebrity linguist on the list. There are too many books to choose from here, but a fun recent(ish) addition is this linguistic guide to the UK, complete with photographs. The entries are varied – a place with some significant connection to the history of English, the location of an important historical linguistic source, the home of some important linguistic work.
C what I mean?
A (belated) happy Mother Language Day! If you missed it yesterday, you can catch up on the what and the why here.
One family of languages that could never be counted as mother tongues is programming languages. Yet various US states are considering allowing coding classes in schools to count alongside Spanish, Chinese or Italian lessons towards foreign language learning requirements. Last week, as a bill with this kind of suggestion was being debated in Florida, the popular linguistics writer Gretchen McCulloch was asked how natural languages differ from programming languages (and so why this is a bad idea).
Here, with quite a bit of help from my software engineer husband (Cambridge ain’t called Silicon Fen for nothing), I consider some more differences as well as some similarities between programming languages and natural languages.
- First up, syntactic ambiguity
As Gretchen McCulloch mentioned, natural languages like English are often syntactically ambiguous. What do we mean by this? Take the following examples:
A boy climbed every tree.
> There was a boy and that boy climbed every tree (i.e., one boy did lots of climbing).
> For every tree, there was a boy that climbed it (but not necessarily the same one).
I’m not going to give a talk in London on Thursday
…. I’m going to attend a talk
… I’m going to give a talk in Brighton
… I’m going to give a talk in London on Friday
The girl saw a man with a telescope
> the girl saw a man who had a telescope
> the girl through a telescope saw a man
That is, there is more than one possible mapping from the surface form to the meaning of the utterance. Now, in natural languages, the context, as well as prosodic cues like stress in speech, allows us to disambiguate the intended meaning fairly easily. In contrast, as Gretchen McCulloch says, “Formal languages don’t want you to do that.” Indeed, most programming languages have a perfect form-function mapping between syntax and semantics. So, more properly, they don’t allow you to do that. However, some languages, like Haskell and C++, do actually allow ambiguities to be written. Consider the following sentence of English:
If it’s raining tomorrow, then if I need to go shopping, I’ll take the car; otherwise I’ll go on my bike.
Admittedly, it’s fairly unlikely that someone would construct a sentence like that in spontaneous speech. But assuming they did, then the listener hits the problem of how the ‘otherwise’ clause resolves – is it attached to ‘if it’s raining tomorrow’, or to ‘then if I need to go shopping’? In other words, what happens when it’s raining but I don’t need to go shopping, or if I need to go shopping but the sun is shining? This kind of structure occurs in programming languages too (https://en.wikipedia.org/wiki/Dangling_else), but, unlike a human listener, the compiler can’t use context to work out what the programmer meant: either the language’s grammar stipulates a resolution (in C-family languages, an ‘else’ attaches to the nearest unmatched ‘if’), or the program has to spell out explicitly how the structure is meant to be disambiguated.
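To make this concrete, here is the rain/shopping sentence transcribed into TypeScript – a minimal sketch, and all the names (isRaining, takeCar, and so on) are invented for illustration:

```typescript
// The rain/shopping sentence, transcribed directly.
const isRaining: boolean = true;
const needShopping: boolean = false;
const takeCar = () => console.log("taking the car");
const takeBike = () => console.log("taking the bike");

// The 'dangling else': which 'if' does the 'else' attach to?
// The grammar stipulates the nearest one, so this parses as:
//   "if it's raining: take the car if I need to shop, the bike otherwise"
// and says nothing at all about what to do on a sunny day.
if (isRaining)
  if (needShopping) takeCar();
  else takeBike();

// To get the other reading (bike whenever it isn't raining), the
// grouping has to be spelled out explicitly with braces:
if (isRaining) {
  if (needShopping) takeCar();
} else {
  takeBike();
}
```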
- Secondly, and more briefly, implicated meaning
And of a particular sort: in natural languages, speakers can convey meaning not only through what they say, but also in how they say it – the forms that they use. For example, saying ‘Might I possibly ask you to close the window?’ conveys not only a request but also the fact that the speaker is being polite and respectful. Similarly, if I tell you that ‘yesterday Bob was driving along when, suddenly, he caused the car to stop’, you wonder if he pulled the handbrake or hit a pothole (or even a tree), otherwise I would have told you ‘he stopped’.
In programming languages, just like in natural languages, there are usage conventions. However, these are for the benefit of the human reader, not for the computer. A software engineer might look at some code and infer something about its style, what kind of experience the programmer has, and so on, but this isn’t part of the communicative act – the compiler, who plays the part of the interlocutor, doesn’t care about any of those things.
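As a wholly invented illustration: the two functions below mean exactly the same thing to the compiler, but a human reader might well draw manner-style inferences about their authors.

```typescript
// Idiomatic TypeScript: what an experienced hand would probably write.
const squares = (xs: number[]): number[] => xs.map((x) => x * x);

// Computes exactly the same thing; the compiler doesn't care. But the
// 'marked' style might lead a reviewer to infer that the author is new
// to the language, or carrying over habits from elsewhere – an inference
// about manner, not about content.
function squaresVerbose(xs: number[]): number[] {
  const result: number[] = [];
  for (let i = 0; i < xs.length; i = i + 1) {
    result.push(xs[i] * xs[i]);
  }
  return result;
}
```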
- Thirdly, linguistic change
This is a characteristic of natural language that all speakers are aware of. Often, this comes in the form of language pundits who bemoan the use of like as a quotative or the singular gender-neutral use of ‘they’, or the many other ways English (or any other language) is thought to be going down the drain. Language change is inevitable, and happens not only at the level of word meanings, but also sounds and syntactic constructions. It happens gradually over time as children acquire language from limited input, and as speakers use language and interact with speakers of different varieties and languages.
Language change happens in programming languages too. However, the kind that most closely parallels natural language change is change in usage, not in the grammar or lexicon: for instance, programmers might notice that a particular construction that was allowed by a language but not really used very often is actually more useful than they initially thought, and start employing it more. Changes to the grammar or lexicon, though, are decided by committee (for instance, Java 8 now allows the kind of ambiguities we were talking about earlier) – after all, when you only have 15 words in your language, changing the meaning of one is a pretty big deal. And of course, such conscious en masse decisions are something very rare and usually ineffective in natural languages.
You can think of many other ways that Java differs from Javanese, Python from Tok Pisin, and Swift from Spanish – and I may revisit the theme in a later post.
In a manner of speaking
Way way back many blog posts ago, I wondered why some pragmaticians have been so obsessed with eating cookies. Well, not exactly, but they have spent a lot of time investigating utterances like:
Ben ate some of the cookies.
Some pupils failed the exam.
On a standard view, these utterances literally mean something like “Ben ate some and possibly all of the cookies” and “Some and possibly all pupils failed the exam”, but in the right context, the hearer infers the speaker’s intended meaning that “Ben ate some but not all of the cookies” and “Some but not all pupils failed the exam”. These implicated meanings are known as scalar implicatures (‘implicature’ being a technical term coined by Paul Grice for non-deductive implications beyond the literal meaning of what a speaker says, based on assumed principles of co-operative conversation). That’s because the key word in the utterance, here ‘some’, belongs on a scale with some alternative word that the speaker could have said but didn’t (like ‘all’).
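Using the +> notation from earlier, the standard view for the second example can be written compactly (the logical notation is my own gloss, not anything from the literature being summarised):

$$
\underbrace{\exists x\,[\mathrm{pupil}(x) \land \mathrm{failed}(x)]}_{\text{literal: some, and possibly all}}
\;\;{+\!\!>}\;\;
\underbrace{\neg\,\forall x\,[\mathrm{pupil}(x) \rightarrow \mathrm{failed}(x)]}_{\text{implicated: not all}}
$$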
And we can think of other examples like:
The coffee is warm
+> but not hot.
The concert was good
+> but not excellent.
There are loads of reasons why pragmaticians (and especially the experimental sort) have concentrated on the ‘some but not all’ implicature: it’s easy to depict visually, it’s pretty consistent across contexts (or without much context – good for controlled but not very natural experiments); it’s easy to make nice balanced stimuli by just changing one word, and so on. However, what we’re now learning is that ‘some’ is perhaps not so representative of scalar implicatures after all. And if we can’t even generalise from ‘some’ to scalar implicatures, what about quantity implicatures (of which scalars are a subtype) or other kinds of implicature, manner and relevance?
Given the apparent dearth of research on manner implicatures, I decided to do some investigating myself. Now, manner implicatures arise when speakers use some marked form to convey a marked meaning: an unconventional phrase to express that what they’re describing is not a stereotypical instance. Grice’s own example was:
‘Miss Singer produced a series of sounds corresponding closely to the score of an aria from Rigoletto’
The idea is: why did the speaker go to such lengths, when they could have just said ’sang’? It’s because the singing was in some way not stereotypical – probably downright awful!
Here are some other potential examples:
Ben constructed a pile of bricks and mortar.
+> Ben built a wall, if you can call it a wall.
(Otherwise the speaker would have said “built a wall”)
Mary caused the car to stop.
+> Mary stopped the car in some unusual way (e.g., pulling the handbrake, driving into a tree…)
(Contrast with “stopped the car”)
Terry put the duvet and pillows on top of the bed.
+> Terry made the bed, but messily.
(With the alternative “made the bed”)
Now, why have these inferences received so little attention? One possibility is that they’re not really a definable category on their own, but really a motley bunch of quantity implicatures, conventions and other stuff (as, for example, Horn would have it). However, the fact that what is important here is the form of the utterance rather than the content (the lexicon and syntax, not the semantics) means that they are in some ways distinct, and at least in principle worthy of research in their own right (as Levinson, 2000, thinks) – even if, in the end, we find out they’re not so interesting after all.
Another possibility is that they’re hard to investigate. This has certainly been my own experience. It’s hard to think up examples, and it’s almost impossible to search corpora for them, except for the most conventionalised of cases. They seem to be rare and somewhat precarious in real life conversation, and when you do try to test them out, people seem to have very varying degrees of sensitivity to them.
This could give pause for thought and suggest that maybe they are not a distinct pragmatic phenomenon after all. However, perhaps it’s not surprising that they don’t lend themselves to the normal tools of experimental pragmatics, like acceptability judgement tasks and picture matching tasks, which tend to rely on participants’ intuitions about isolated utterances. Depending, as they do, on the speaker’s choice of words and grammar – on how she communicates her meaning, not just what she communicates – they may rely on a greater degree of knowledge of language and its conventional use, or at least on a greater degree of confidence in that knowledge. They are likely to be extremely variable depending on the linguistic context: is it formal? is there jargon? is the speaker a native or second language learner? does the speaker have their own unusual style? They are likely to be cued with intonation or hedges or discourse markers (“Well, he constructed a pile of bricks and mortar”). This means that in an unnatural experimental context (like choosing a picture that matches an utterance), participants may not be confident enough about any inference they do make, or they may not make any manner inference at all without those extra cues and information about the speaker.
I’ve found some evidence that some adults are sensitive to some manner implicatures, but I’ve no show-stopping conclusions yet. So if you think you’ve made a manner inference recently, then do give me a shout!