An RSA model of free choice

Let's continue. I'm going to present a new (?) model of free choice. Free choice is the phenomenon that a disjunction embedded in a possibility modal conveys the possibility of both disjuncts. 'You may have tea or coffee', for example, conveys that you may have tea and you may have coffee. Champollion, Alsop, and Grosu (2019) present an RSA model of this effect, drawing on the "lexical uncertainty" account from Bergen, Levy, and Goodman (2016). I'll present a model that does not rely on lexical uncertainty.

1. Overview

Like many other contemporary accounts of free choice, mine is inspired by Kratzer and Shimoyama (2002), who pointed out that the inference might be a higher-order implicature. A hearer might reason as follows.

The speaker said '◇(A or B)'. Saying '◇A' would have implicated that ¬◇B. So this would have been a good choice if the speaker knew that ◇A and ¬◇B. Similarly, '◇B' would have been a good choice if the speaker knew that ◇B and ¬◇A. Since the speaker didn't choose '◇A' or '◇B', she doesn't know that ◇A and ¬◇B, and she doesn't know that ◇B and ¬◇A. Given that she is well informed, it follows that either ◇A and ◇B are both true, or they are both false. The latter is incompatible with what the speaker said. So ◇A and ◇B are both true.

It sounds straightforward. But if you try to implement it, some problems emerge.

First, we need to ensure that '◇A' implicates ¬◇B and '◇B' ¬◇A. Second, we then need to explain why '◇(A or B)' is a reasonable choice to convey that ◇A and ◇B.

To see the second problem, note that the above reasoning seems to go through just as well for plain disjunctions 'A or B', where it would show (falsely) that 'A or B' implicates 'A and B':

The speaker said 'A or B'. 'A' would have implicated ¬B, 'B' would have implicated ¬A. So the speaker doesn't know A and ¬B, and she doesn't know B and ¬A. Given that she is well-informed, either A and B are both true or they are both false. The latter is incompatible with what the speaker said. So A and B are both true.

Any account of free choice must explain why '◇(A or B)' is interpreted as '◇A and ◇B' but 'A or B' is not interpreted as 'A and B'. What makes the difference?

Danny Fox has argued that free-choice effects arise iff the relevant disjunctive statement lacks a conjunctive alternative. (See Fox (2007), Singh et al. (2016), Fox and Katzir (2021).) According to popular ways of defining alternatives (e.g., Fox and Katzir (2011)), a plain disjunction 'A or B' has 'A and B' as an alternative, but a modalized disjunction '◇(A or B)' does not have '◇A and ◇B' as an alternative. This could explain the difference.

Let's assume that alternatives enter the picture in the way I described in the final section of this post: When a hearer encounters an utterance U, he wonders why the speaker chose U rather than some alternative to U. Concretely, a hearer who receives the message 'A or B' wonders why the speaker didn't utter 'A and B', but a hearer who receives '◇(A or B)' only wonders why the speaker didn't utter '◇A' or '◇B' or '◇(A and B)'; he doesn't even consider '◇A and ◇B'.

Unfortunately, the fixed-alternatives approach makes the first problem worse. Why would '◇A' implicate ¬◇B? In this post, I showed that 'We have apple juice' can convey 'We don't have orange juice', due to its competition with 'We have apple and orange juice'. We could similarly predict that '◇A' can convey ¬◇B due to its competition with '◇A and ◇B'. But '◇A and ◇B' is not an alternative to '◇A'! If hearers only consider genuine alternatives to the chosen utterance, we need a different story of why '◇A' implicates ¬◇B.

We also still face a version of the second problem. If all goes well, we can show that a sufficiently high-level speaker would use '◇(A or B)' to convey ◇A and ◇B. But what about a lower-level speaker? According to standard modal semantics, which I here take for granted, '◇(A or B)' is equivalent to '◇A or ◇B'. At levels before '◇A' is pragmatically strengthened to mean '◇A and ¬◇B', a well-informed speaker would always prefer '◇A' or '◇B' to '◇(A or B)', just as a well-informed speaker would always prefer 'A' or 'B' to 'A or B'. A low-level hearer who knows that the speaker is well-informed can therefore be sure that '◇(A or B)' won't be uttered. But then a higher-level speaker can't figure out how the hearer would respond to '◇(A or B)', so it's unclear how '◇(A or B)' would ever become a sensible choice.

At this point, Franke stipulates that "surprise utterances" like '◇(A or B)' convey nothing at all: the hearer responds by retaining his prior credence. (See the previous post.) A more natural response in the RSA framework would be to assume that speakers sometimes fail to choose the optimal act. If we set a low-level speaker's soft-max parameter (alpha) to a finite value, she sometimes says '◇(A or B)', even though '◇A' or '◇B' would have greater expected utility. It turns out, however, that you then can't predict the free-choice effect.

Champollion, Alsop, and Grosu (2019) solve both problems by assuming that '◇A' can be strengthened to '◇A and ¬◇B' already at level 0. For a level-1 speaker who knows that ◇A and ◇B, '◇(A or B)' is then already a better choice than either '◇A' or '◇B'.

The supposed level-0 strengthening from '◇A' to '◇A and ¬◇B' evidently can't arise as a pragmatic inference. Instead, Champollion et al. assume that '◇A' has two literal meanings: its standard unstrengthened meaning, and an "exhaustified" meaning on which it is equivalent to '◇A and ¬◇B'.
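To see roughly what that amounts to in the notation I'll use below (this is only an illustration, not Champollion et al.'s actual implementation), lexical uncertainty means that the hearer doesn't know which of two lexica the speaker is using:

// Sketch of the lexical-uncertainty assumption (illustration only). 'may A' is
// ambiguous between its plain meaning and an "exhaustified" meaning equivalent
// to '◇A and ¬◇B'; the hearer is uncertain which lexicon is in play.
var lexica = [
    { 'may A': function(state) { return state['MA'] } },                  // plain
    { 'may A': function(state) { return state['MA'] && !state['MB'] } }   // exhaustified
];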

In the model below, I will assume that '◇A' has only its standard, unstrengthened meaning. So I need a new answer to the two problems: (1) how does '◇A' convey ¬◇B? (2) how does a low-level hearer make sense of '◇(A or B)'?

Start with the first problem. I need to explain why '◇A' conveys ¬◇B, even though the hearer only considers the alternative '◇B'. My tentative answer is that when a hearer encounters an utterance U, he infers not only that U is among the best of its alternatives, but that U is uniquely and robustly best.

To motivate this, suppose tea and coffee are both allowed, and equally relevant. (Perhaps you just asked whether you can have tea or coffee.) In this context, there is something wrong with uttering 'you may have tea'. Why single out the tea? You could just as well have said 'you may have coffee'. The arbitrariness is objectionable. If I say 'you may have tea', you will assume that I had a positive reason to choose this utterance from among its alternatives.

The second problem is to explain how a low-level hearer would interpret '◇(A or B)', which is semantically equivalent to '◇A or ◇B'. I'm going to adopt the obvious response of allowing for imperfectly informed speakers. Before the pragmatic strengthening of '◇A' and '◇B', a fully informed speaker would never utter '◇(A or B)'. But a partially informed speaker might. At lower levels, '◇(A or B)' will convey uncertainty about whether only ◇A or only ◇B or both. That's because '◇(A or B)' competes with '◇A' and '◇B', whose literal meanings are stronger. Observing an utterance of '◇(A or B)', a hearer can infer that the speaker is not in a position to utter '◇A' or '◇B': she knows neither ◇A nor ◇B. But she does know ◇A or ◇B, by the literal meaning of what she said. So her information is compatible with (i) only ◇A and (ii) only ◇B and (iii) both ◇A and ◇B.

2. The simulation

Let's begin by defining the relevant states, the available utterances, and their meanings. We also define the alternatives for each utterance.

For simplicity, I only distinguish four states, depending on which of A and B are allowed. (I'm not considering the further question whether their conjunction is allowed.)

var states = Cross('MA', 'MB');
var meanings = {
    'may A': function(state) { return state['MA'] },
    'may B': function(state) { return state['MB'] },
    'may A or B': function(state) { return state['MB'] || state['MA'] },
    'may A and may B': function(state) { return state['MB'] && state['MA'] },
    '-': function(state) { return true }
}

var alternatives = function(u) {
    // s is an alternative to u iff s doesn't have more words
    return filter(function(s) {
        return numWords(s) <= numWords(u)
    }, keys(meanings));
}
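To make the fixed-alternatives assumption concrete, here is a quick check of what the word-count criterion delivers (assuming this is run in the same context as the block above, and assuming numWords, from the webppl-rsa package, counts whitespace-separated words; the expected outputs in the comments are just my reading of the definitions):

display(alternatives('may A'));
// expected: ['may A', 'may B', '-']  (everything with at most two words)
display(alternatives('may A or B'));
// expected: everything except 'may A and may B', which has five words;
// so the modalized disjunction has no conjunctive alternative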

Speakers work as usual. At any level, the speaker compares all available options by their length and by the expected hearer accuracy they would bring about.

// continues #1
var state_prior = Indifferent(states);
var makeSpeaker = function(hearer) {
    return function(observation, alternatives) {
        return Agent({
            options: alternatives || keys(meanings),
            credence: update(state_prior, observation),
            utility: function(u,s){
                return marginalize(learn(hearer, u), 'state').score(s) - cost(u);
            }
        });
    }
};
var cost = function(u) {
    return u == '-' ? 10 : u.length/20; 
};

Hearers beyond level 1 conditionalize on the assumption that the observed utterance was uniquely and clearly the best option. I've implemented this by defining a bestOption function in the webppl-rsa package. If a is an agent, then bestOption(a) checks whether a has a uniquely and clearly best option and, if so, returns it. (Internally, the agent is construed as soft-maxing with low alpha, and an option counts as "uniquely and clearly best" if it emerges as substantially more likely than the next best option.)
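For illustration, here is a minimal sketch of the kind of check this involves (not the actual webppl-rsa definition of bestOption, which operates on an Agent object; here I work directly with a list of options and an expected-utility function, and the alpha value and the 0.2 probability gap are arbitrary stand-ins for "low" and "substantially more likely"):

// Illustrative sketch only, not the webppl-rsa implementation of bestOption.
// Soft-max the options with a low alpha and require a clear probability gap
// between the best option and the runner-up.
var robustlyBest = function(options, expUtility) {
    var alpha = 2;  // "low" soft-max parameter (arbitrary choice)
    var weights = map(function(o) { return Math.exp(alpha * expUtility(o)) }, options);
    var total = sum(weights);
    var probs = map(function(w) { return w / total }, weights);
    var bestProb = reduce(function(p, acc) { return p > acc ? p : acc }, 0, probs);
    var unique = filter(function(p) { return p == bestProb }, probs).length == 1;
    var rest = filter(function(p) { return p < bestProb }, probs);
    var runnerUp = rest.length > 0 ? reduce(function(p, acc) { return p > acc ? p : acc }, 0, rest) : 0;
    // "uniquely and clearly best": no tie, and a substantial lead over the runner-up
    return (unique && bestProb - runnerUp > 0.2) ? options[probs.indexOf(bestProb)] : undefined;
};

If two options tie for the top spot, or the gap is small, the sketch returns undefined; that is the case in which the hearer can't construe the utterance as robustly best.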

Hearers are unsure both about the state (what is permitted) and about the speaker's information. Initially, they think that the speaker is most likely to be either fully informed or entirely uninformed.

// continues #2
var access_prior = { 'full': 0.45, 'partial': 0.05, 'none': 0.5 };
var makeHearer = function(speaker) {
    return Agent({
        credence: join({
            'state': state_prior,
            'access': access_prior
        }),
        kinematics: function(utterance) {
            return speaker ? function(s) {
                var obs = evaluate(get_observation[s.access], s.state)
                var sp = speaker(obs, alternatives(utterance))
                return bestOption(sp) == utterance;
            } : function(s) {
                return evaluate(meanings[utterance], s.state);
            };
        }
    });
};
var get_observation = {
    'full': function(state) { return state },
    'partial': function(state) {
        // sample a partial observation: a set of 2 or 3 states that contains the true state
        var observations = filter(function(obs) {
            return obs.includes(state) && obs.length > 1 && obs.length < states.length;
        }, powerset(states));
        return uniformDraw(observations);
    },
    'none': function(state) { return states }
};
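To see what the 'partial' access type amounts to: such a speaker learns a random two- or three-element set of states that contains the true state, so she is neither fully informed nor completely ignorant. Here is a quick check (run in the same context as the block above; the variable name is mine, and the observation is sampled, so repeated runs can differ):

// One sampled 'partial' observation for the state where A and B are both allowed.
var s_both = filter(function(s) { return s['MA'] && s['MB'] }, states)[0];
display(get_observation['partial'](s_both));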

That's all.

Let's initialize the level-0 hearer and the level-1 speaker, and see how they behave.

// continues #3
var hearer0 = makeHearer();
var speaker1 = makeSpeaker(hearer0);
var info_ma_and_mb = [{ MA: true, MB: true }];
var info_ma_or_mb = [{ MA: true, MB: false}, {MA:false, MB:true}, {MA:true, MB:true}];
var info_ma = [{ MA: true, MB: false}, {MA:true, MB:true}];
showBestOption(speaker1, [info_ma_and_mb, info_ma_or_mb, info_ma]);

Here we consider three information states. If the level-1 speaker knows that A and B are both allowed, then 'may A and may B' is the uniquely best option. If she only knows that at least one of A and B is allowed, 'may A or B' is optimal. If she knows that A is allowed and lacks information about B, 'may A' is optimal.

Here is the level-2 hearer:

// continues #4
var hearer2 = makeHearer(speaker1);
showKinematics(hearer2, ['may A or B', 'may A and may B', 'may A']);

The hearer interprets 'may A or B' as signalling incomplete information. He interprets 'may A' as (weakly) indicating that B is not allowed, because his prior disfavours partially informed speakers.

On to level 3:

// continues #5
var speaker3 = makeSpeaker(hearer2);
showBestOption(speaker3, [info_ma_and_mb, info_ma_or_mb, info_ma]);

The level-3 speaker still chooses 'may A and may B' if she knows that A and B are both allowed. She still chooses 'may A or B' if she only knows that at least one of A and B is allowed. If she knows that A is allowed and lacks information about B, she now prefers 'may A or B', because 'may A' would implicate that B is not allowed, and she doesn't know if this is true.

At level 4, we get the free choice effect:

// continues #6
var hearer4 = makeHearer(speaker3);
display("hearer4 hears 'may A or B'");
viz.table(learn(hearer4, 'may A or B'));

Upon hearing 'may A or B', the level-4 hearer considers in what information state 'may A or B' would have been the robustly best option among its alternatives, for the level-3 speaker. The alternatives are 'may A', 'may B', and 'may A or B' (plus the costly null utterance '-'). Have a look at the decision matrix for speaker3 in a situation where she knows that A and B are both allowed:

// continues #6
showDecisionMatrix(speaker3(info_ma_and_mb)); 

'May A or B' is clearly best among its own alternatives (which, recall, do not include 'may A and may B'). So the level-4 hearer can see two possible explanations for why the speaker chose 'may A or B': either the speaker is incompletely informed (as before), or the speaker knows that A and B are both allowed. (This second possibility did not exist at level 2. A level-1 speaker who knows that A and B are both allowed would prefer 'may A' and 'may B' over 'may A or B', and none of the alternatives to 'may A or B' would be robustly optimal.)

We should also check that a level-5 speaker would utter 'may A or B', if she knows that A and B are both allowed. At this stage, 'may A and may B' still leads to greater hearer accuracy (as it has no "speaker is ignorant" interpretation). The speaker prefers 'may A or B' due to its comparative simplicity:

// continues #7
var speaker5 = makeSpeaker(hearer4);
showChoices(speaker5, [info_ma_and_mb]);
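To get a feel for the trade-off, compare the two utterances' costs under the cost function defined earlier (a quick check in the same context; the values in the comments just apply that definition):

display(cost('may A or B'));       // 10 characters / 20 = 0.5
display(cost('may A and may B'));  // 15 characters / 20 = 0.75
// So 'may A and may B' must buy more than 0.25 of extra expected accuracy
// (a log score) to be worth its additional length for the level-5 speaker.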

3. Three disjuncts

Franke's model does not generalize to cases with three disjuncts. The above model does:

var states = Cross('MA', 'MB', 'MC');
var meanings = {
    'may A': function(state) { return state['MA'] },
    'may B': function(state) { return state['MB'] },
    'may C': function(state) { return state['MC'] },
    'may A or B': function(state) { return state['MB'] || state['MA'] },
    'may A or C': function(state) { return state['MA'] || state['MC'] },
    'may B or C': function(state) { return state['MB'] || state['MC'] },
    'may A and may B': function(state) { return state['MB'] && state['MA'] },
    'may A and may C': function(state) { return state['MA'] && state['MC'] },
    'may B and may C': function(state) { return state['MB'] && state['MC'] },
    'may A or B or C': function(state) { return state['MB'] || state['MA'] || state['MC'] },
    'may none': function(state) { return !state['MB'] && !state['MA'] && !state['MC']},
    'may A and may B and may C': function(state) { return state['MB'] && state['MA'] && state['MC'] },
    '-': function(state) { return true }
}
var alternatives = function(u) {
    // s is an alternative to u iff s doesn't have more words
    return filter(function(s) {
        return numWords(s) <= numWords(u)
    }, keys(meanings));
}

var state_prior = Indifferent(states);
var makeSpeaker = function(hearer) {
    return cache(function(observation, alternatives) {
        return Agent({
            options: alternatives || keys(meanings),
            credence: update(state_prior, observation),
            utility: function(u,s){
                return marginalize(learn(hearer, u), 'state').score(s) - cost(u);
            }
        });
    });
};
var cost = function(u) {
    return u == '-' ? 10 : u.length/50; 
};

var access_prior = {
  'full': 0.42,
  'full_MA': 0.02,
  'full_MB': 0.02,
  'full_MC': 0.02,
  'partial': 0.02,
  'none': 0.5
};
var observationDist = function(match) {
    // Returns an observation function: the speaker learns a random partial set of
    // states compatible with the true state. If 'match' is given, all states in the
    // set agree with the true state on that proposition, so the speaker knows
    // whether it is permitted.
    return function(state) {
        var candidates = match ?
            filter(function(s) { return s[match] == state[match] }, states) :
            states;
        var observations = filter(function(obs) {
            return obs.includes(state) && obs.length > 1 && obs.length < states.length;
        }, powerset(candidates));
        return uniformDraw(observations);
    };
};
var get_observation = {
    'full': function(state) { return state },
    'full_MA': observationDist('MA'),
    'full_MB': observationDist('MB'),
    'full_MC': observationDist('MC'),
    'partial': observationDist(),
    'none': function(state) { return states }
};
var makeHearer = function(speaker) {
    return Agent({
        credence: join({
            'state': state_prior,
            'access': access_prior
        }),
        kinematics: function(utterance) {
            return speaker ? function(s) {
                var obs = evaluate(get_observation[s.access], s.state)
                var sp = speaker(obs, alternatives(utterance))
                return bestOption(sp) == utterance;
            } : function(s) {
                return evaluate(meanings[utterance], s.state);
            };
        }
    });
};

var hearer0 = makeHearer();
var speaker1 = makeSpeaker(hearer0);
var hearer2 = makeHearer(speaker1);
var speaker3 = makeSpeaker(hearer2);
var hearer4 = makeHearer(speaker3);

display("hearer4 hears 'may A or B or C':");
viz.table(learn(hearer4, 'may A or B or C'));

I'm now distinguishing six possibilities about the speaker's access to the state:

  1. The speaker has full access to what is permitted.
  2. The speaker knows whether A is permitted and has incomplete (or no) information about B and C.
  3. The speaker knows whether B is permitted and has incomplete (or no) information about A and C.
  4. The speaker knows whether C is permitted and has incomplete (or no) information about A and B.
  5. The speaker has some other incomplete information.
  6. The speaker has no information.

For the effect to arise, hearers must consider possibility 1 to be more likely than 2-5, and they must not consider 5 to be much more likely than 2-4. In other words, the hearer must assume that if the speaker has any information at all, then she probably has full information, or at least full information about one of the disjuncts.

This assumption is needed to ensure that 'May A or B' conveys 'Not May C'. For suppose a speaker knows that at least one of A and B is allowed, and has no information about C. Then she is in a position to assert 'May A or B', but not 'May A or C' or 'May B or C'. So 'May A or B' is the uniquely best option. If a hearer gives high probability to encountering such a speaker, he won't see 'May A or B' as indicating 'Not May C'.
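To check how this plays out in the simulation, one can inspect the three-disjunct hearer's response directly (continuing the code above; whether ¬◇C comes out depends on the access prior, as just described):

display("hearer4 hears 'may A or B':");
viz.table(learn(hearer4, 'may A or B'));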

4. Room for improvements

The model is not as fragile as Franke's, but it is still not as robust as one might like. I had to hard-code quite specific assumptions about the speaker's access. Ideally, we would be able to derive these by some pragmatic mechanism. The effect also depends on a relatively specific cost function. Worse, the free-choice effect does not become stronger at levels beyond 4, as one might hope. I'm also not entirely happy about the derivation of the inference from '◇A' to ¬◇B, based on the "non-arbitrariness" requirement.

I suspect that most of these problems could be avoided if we relaxed the fixed-alternatives approach, perhaps in favour of a model with uncertainty about the costs, as I explained here.

Bergen, Leon, Roger Levy, and Noah Goodman. 2016. “Pragmatic Reasoning Through Semantic Inference.” Semantics and Pragmatics 9 (20). doi.org/10.3765/sp.9.20.
Champollion, Lucas, Anna Alsop, and Ioana Grosu. 2019. “Free Choice Disjunction as a Rational Speech Act.” Semantics and Linguistic Theory 29: 238–57. doi.org/10.3765/salt.v29i0.4608.
Fox, Danny. 2007. “Free Choice and the Theory of Scalar Implicatures.” In Presupposition and Implicature in Compositional Semantics, edited by U. Sauerland and P. Stateva, 71–120. Basingstoke: Palgrave Macmillan.
Fox, Danny, and Roni Katzir. 2011. “On the Characterization of Alternatives.” Natural Language Semantics 19: 87–107.
Fox, Danny, and Roni Katzir. 2021. “Notes on Iterated Rationality Models of Scalar Implicatures.” Journal of Semantics 38 (4): 571–600. doi.org/10.1093/jos/ffab015.
Kratzer, Angelika, and Junko Shimoyama. 2002. “Indeterminate Pronouns: The View from Japanese.” In Proceedings of the 3rd Tokyo Conference on Psycholinguistics, 1–25. Tokyo: Hituzi Syobo.
Singh, Raj, Ken Wexler, Andrea Astle-Rahim, Deepthi Kamawar, and Danny Fox. 2016. “Children Interpret Disjunction as Conjunction: Consequences for Theories of Implicature and Child Development.” Natural Language Semantics 24 (4): 305–52. doi.org/10.1007/s11050-016-9126-3.
