**Stage 1:** Player 1 learns *b* and *c*;
**Stage 2:** Player 1 promises or does not promise to refrain from X;
**Stage 3:** Player 1 learns *d*;
**Stage 4:** Player 1 does or does not do X.
Player 1 receives

σ_A · *b* · P₂[1 refrains from X | 1 promises] + σ_X · (*d* − *c*),

where σ_A equals 1 if 1 apologizes and 0 otherwise, and σ_X equals 1 if he does X and 0 if he refrains from X. Player 2 has no moves and is assigned no payoffs, but holds a belief P₂ about 1's reliability. Note that the probability in 1's payoff is the one held by 2. It is not conditional on *b*, *c*, or *d*, since 2 does not know these. Player 1 can infer this belief exactly, and so it enters 1's payoff function. A subgame perfect Bayesian equilibrium will require consistency between 1's moves and payoffs, as well as between 1's and 2's beliefs.
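The payoff can be sketched as a small function. This is a hypothetical illustration, not code from the source; to match the expected-payoff calculations later in the section, it assumes the cost *c* is borne only when doing X breaks a promise (the *c* term is switched off when no promise was made):

```python
def payoff(promises: bool, does_x: bool, b: float, c: float, d: float,
           p2_keep: float) -> float:
    """Player 1's payoff from one play of the game.

    b: benefit of having a promise believed
    c: cost of breaking a promise by doing X (assumption: paid only if
       a promise was made, matching the expectations derived in the text)
    d: benefit of doing X, learned only at Stage 3
    p2_keep: Player 2's belief P2[1 refrains from X | 1 promises]
    """
    sigma_a = 1 if promises else 0   # 1 iff Player 1 apologizes/promises
    sigma_x = 1 if does_x else 0     # 1 iff Player 1 does X
    return sigma_a * b * p2_keep + sigma_x * (d - sigma_a * c)
```

With no promise the payoff reduces to σ_X · *d*, so Player 1 does X whenever *d* > 0; after a promise he does X only when *d* − *c* > 0, which is the comparison behind the selection argument that follows.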
Generically the game has exactly one equilibrium: at Stage 2 Player 1 makes no promise, and Player 2, whether he hears a promise or not, believes with probability 1 that Player 1 will do X. At Stage 4, of course, Player 1 simply optimizes, "generically" taking the action X only if *d* − *c* > 0. ("Generically" means that the claim is true except for a set of situations with probability 0. Player 1 types with *c* exactly 0, for example, will have nothing to lose and will be indifferent between making and not making a promise, but other types will want to refrain.)
The equilibrium result can be understood by supposing, contrary to fact, that an equilibrium exists in which Player 1 promises with non-zero probability. Suppose 2's probability that a promise will be kept is *T*. It is easy to show that making a promise gives 1 an expectation of *bT* + (1 − *c*)²/2, while not promising gives 1 an expectation of 1/2. If *T* = 0 then promising would be suboptimal for all 1-types with *c* > 0, contrary to the assumption. Thus *T* > 0, and Player 1 will promise if *b* > [1 − (1 − *c*)²]/(2*T*). As a function of *c*, this threshold starts at *b* = 0 and rises, so the types least likely to keep a promise (those with low *c*) are precisely the ones most willing to make one. A calculation of the likelihood of keeping the promise for each hypothesized value of *T* shows that there is no fixed point in *T* except *T* = 0.
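The no-fixed-point claim can be checked numerically. The sketch below is an assumption-laden illustration: it takes *b*, *c*, *d* independent and uniform on [0, 1], so that not promising yields 1/2, promising yields *bT* + (1 − *c*)²/2, a type promises when *b* exceeds [1 − (1 − *c*)²]/(2*T*), and a promiser with cost *c* keeps the promise with probability *c* (= Pr[*d* < *c*]). Iterating the map from hypothesized to actual reliability then collapses *T* toward 0:

```python
def keep_prob_among_promisers(T: float, grid: int = 20_000) -> float:
    """Given 2's hypothesized reliability T, return the actual probability
    that a promise is kept, averaging over the types who choose to promise.

    Assumes b, c ~ U[0,1] independent; a type (b, c) promises when
    b > [1 - (1 - c)^2] / (2T), and a promiser with cost c keeps the
    promise with probability c (= Pr[d < c] for d ~ U[0,1]).
    """
    if T <= 0.0:
        return 0.0
    num = den = 0.0
    for i in range(grid):
        c = (i + 0.5) / grid                          # midpoint rule over c
        threshold = (1 - (1 - c) ** 2) / (2 * T)      # minimal b that promises
        p_promise = max(0.0, 1.0 - threshold)         # Pr[b > threshold]
        num += c * p_promise                          # promise-keepers
        den += p_promise                              # promisers
    return num / den if den > 0 else 0.0

# Starting from any positive hypothesized belief, iteration drives T to 0:
T = 0.5
for _ in range(30):
    T = keep_prob_among_promisers(T)
print(T)  # tiny: the only consistent belief is T = 0
```

Each pass lowers the reliability of the pool of promisers (e.g., hypothesizing *T* = 0.5 yields an actual keep-probability of only 0.25), which is the selection effect the text describes.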
The players’ problem is like Akerlof’s “market for lemons,” where the very fact that the seller accepts your offer tells you that the car is probably not worth it, so no deal can be made. Here the fact that a party is willing to make a promise implies that they will probably not keep it.
*Apologizing as promising with a show of remorse*
One hypothesis is that an apology has its multiple features in part because these mitigate the unfortunate selection effect of voluntary promises. An example is the expression of remorse. One cannot apologize in a deadpan, and like many emotions, remorse is associated with physical displays that are hard to counterfeit. We can model this by postulating that when 1 apologizes he shows a degree of remorse commensurate with his value of *c*. With a certain probability, here taken as 1/2, Player 2 is able to discern the value of *c* from 1’s display and thereby accurately assess the degree of reliability of 1’s promise. With probability 1/2, however, Player 2 fails to make a reliability judgment at all and, knowing that, uses only the fact that 1 made the apology as a basis for assessing whether 1 will keep it. This is called the