Game theory is a branch of mathematics that was developed to study the structure and resolution of conflict. As such, it has the two goals of categorizing types of conflict according to systematic criteria and offering suggestions for best behavior to the parties engaged in conflict. The theory is intended to treat thoroughly individual, group, and institutional decision tasks in which there exist elements of conflict and cooperation among the parties, in the context of choices among decision alternatives whose resulting outcomes are well specified. The characteristic quality of game-theoretic decision tasks is that the outcome to a particular participant, referred to in game theory as a player, depends not simply on his own choices and the vagaries of chance, but also on the choices of one or more of the other players. The interests of the players might be completely opposed, totally coincident, or, more characteristically, partially opposed and partially coincident. The outcomes to each player, although well specified, are necessarily uncertain, as the choices of the other players, as well as the vicissitudes of chance, are not known with certainty. Though game theory easily treats those situations in which the interests of the players are completely coincident, the interesting aspects of the theory lie in its applications to circumstances in which there is an element of conflict.
McMillan says: “Stripped of its mathematics and jargon, it can be helpful to people in administrative situations faced with important decisions and strategy,” and the opening sentence of his book asks: “What is Game Theory about, and how can it be used in making decisions?” Dixit and Nalebuff write: “It is better to be a good strategist than a bad one, and this book aims to help you improve your skills at discovering and using effective business strategies,” “Our aim is to improve your strategy IQ,” and “Our premise in writing this book is that readers from a variety of backgrounds can become better strategists if they know these principles.”
The exceptional success of Game Theory and the perception that it may "improve strategy IQ" are linked to the rhetoric of Game Theory.
In economic theory, and in Game Theory particularly, interpretation is an essential ingredient of any model. A game differs according to whether the players are human beings, bees, or different "selves" of the same person. A strategic game changes entirely when payoffs are switched from utility numbers representing von Neumann–Morgenstern preferences to sums of money or to measures of evolutionary fitness.
The appraisal of game theory rhetoric is also significant from the public's point of view. The public's interest in game theory is at least partially the result of efforts on the part of academic economists to emphasize the practical value of Game Theory as, for example, a guide to policy makers. Consultants use the professional language of Game Theory in their arguments, and it is in the public interest that these arguments be correctly understood so that the rhetoric does not distort their real content.
In fact, McMillan's book is full of reservations on the direct usefulness of Game Theory: "Game Theory does not profess to tell managers how to run their business"; "Game Theory does not eliminate the need for the knowledge and intuition acquired through long experience"; "Game Theory offers a short cut to understanding the principles of business strategic decision making. Skilled and experienced managers understand these principles intuitively, but not necessarily in such a way that they can communicate their understanding to others"; and "Game Theory, then, is a limited but powerful aid to understanding strategic interactions." Dixit and Nalebuff (despite their statement "Don't compete without it") state that "in some ways strategic thinking remains an art."
The significance of game theory to the social scientific study of conflict of interest is self-evident, though the value of its contribution is not. The theory has been developed, presented, discussed, interpreted, criticized, and applied in myriad articles and books. It has been hailed by some as one of the most outstanding scientific achievements of our century and has been dismissed by others as behaviorally and socially of little or no relevance, or worse, as an insidious influence.
But heated arguments concerning relevance can be avoided if it is kept clear when the discussion treats the underlying "real" situation and when it addresses the model itself. The suitability of the assumptions may be judged to determine whether or not the model captures the relevant aspects of the situation. No conflict situation exists outside of a surrounding context, yet all formal models strip away large parts of that context so as to make the problem tractable. In this abstraction, there is the deliberate choice of which factors of the conflict to keep and which to discard. Any such decision is a judgment call, weighing the need for abstraction in order to obtain concrete results against the risk of obtaining spurious results because the wrong factors were retained.
Before embarking on a short history of game theory, two caveats are warranted in order to dispel some common misperceptions. Both concern the very name "game theory." First, the theory of games is a collection of formal models for studying decision making in conflict situations that are most simply exemplified by games of strategy; hence its name. Though the analogy between games and other conflicts is often helpful, the word "game" unfortunately carries with it connotations of frivolity and irrelevance (Shubik, 1964). But wars, political imbroglios, economic battles, and family disputes are not games in the entertaining sense: they are not entered into for purposes of amusement, they are not easily amenable to dispassionate investigation by the parties concerned, and their players do not always enter the game willingly. It is important, therefore, to recognize the special meaning of "games of strategy."
The first caveat having concerned likely misunderstandings about the word "game," the second concerns the word "theory." The term "game theory" is really a misnomer, as there is a multiplicity of theories that properly fall under that rubric. Game theory as first presented by von Neumann and Morgenstern in 1944 was based on a number of assumptions about the nature of the reasoning pursued by players, both individually and cooperatively. Later, these original assumptions were replaced by alternatives, leading to new theories. The term "game theory" is applied to the collectivity, that is, the whole body of mathematical research within the general framework laid down by von Neumann and Morgenstern. Additionally, applications of the abstract mathematical formulations to the behavioral and social sciences have often been subsumed under the same rubric.
This diversity of theories under a single title is not a weakness; rather, it is precisely this variety that supplies much of the interest of what is intrinsically an exercise in mathematical logic for the student of human behavior. As Lucas (1972) puts it,
. . . This multiplicity of resolutions is for the most part inherent to the problems, and shows the richness of the theory due to the variety of possible human behavior in a multi-polar world in which concepts like ‘fairness’ and ‘most preferred’ mean different things to different people. It is very improbable that all groups of people will accept any one well-defined solution concept as being generally applicable [p. 16].
Analogous circumstances exist in many areas of psychology that have proved amenable to mathematical formalization, for example, probability learning, paired-associate learning, concept identification (Atkinson, Bower, & Crothers, 1965), individual decision making under uncertainty (Luce & Suppes, 1965), and choice in the 2-person Prisoner's Dilemma game (Rapoport & Chammah, 1965). In all of these examples, behavior in a single comparatively simple experimental paradigm may be described by models that differ from one another in their assumptions concerning information processing, response mechanisms, or the subject's objectives. With respect to game theory, the diversity of independently minded decision makers, the various possible norms of fairness and rationality, and the difficulty of capturing the many aspects of prudence within a single definition lead to this proliferation of theories.
The history of game theory might be said to have begun at the start of the eighteenth century. At that time, a new political order was established in Europe, founded on the balance of power among the leading European states. The new concepts of "balance of power" and "political equilibrium" spread quickly in the community of intellectuals, became common phrases, and were invoked to explain and justify many different political arrangements, in particular the formation and dissolution of alliances among nations. Moreover, Lossky (1970) states: "the early eighteenth century certainly tended to work out the balance of power in precise mathematical terms, for which the new science of 'political arithmetick' already supplied a warrant [p. 156]." It is not surprising, then, that the origins of game theory have been attributed (Morgenstern, 1968) to Leibnitz, who as early as 1710 foresaw the need for and possibility of a theory of games of strategy, and to Waldegrave, who two years after Leibnitz first formulated a primitive form of the minimax strategy. From the end of the nineteenth century, contributions to game theory became more frequent. Edgeworth (1881) noted the similarity between games of strategy and certain economic processes, and Zermelo (1913) proved theorems for certain games. Borel (1921) provided a clear statement of an important class of game-theoretic problems and introduced several of the major concepts of game theory, but conjectured that the minimax theorem was false in the general case. Von Neumann (1928), however, proved the basic minimax theorem and thereby laid the cornerstone of the edifice of game theory as we know it.
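The minimax idea that von Neumann formalized can be made concrete with a small computational sketch. The payoff matrix below is hypothetical, chosen so that a saddle point exists in pure strategies; the general theorem, of course, requires mixed strategies.

```python
# Maximin/minimax check on a zero-sum payoff matrix (payoffs to the row player).
# A saddle point exists when the row player's maximin equals the column player's minimax.
A = [
    [4, 1, 3],
    [2, 0, 1],
    [5, 2, 4],  # hypothetical payoffs chosen so a saddle point exists
]
maximin = max(min(row) for row in A)        # row player's guaranteed floor
minimax = min(max(col) for col in zip(*A))  # column player's guaranteed ceiling
print(maximin, minimax)  # 2 2 -> a saddle point in pure strategies
```

When the two values differ, no pure-strategy saddle point exists, and the minimax theorem guarantees equality only once randomized strategies are allowed.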
Game theory represents an abstract model of decision making, not the social reality of decision making itself. Therefore, while game theory ensures that a result follows logically from a model, it cannot ensure that the result itself represents reality, except insofar as the model is an accurate one. To specify such a model precisely requires practitioners to share a common language which, to the inexperienced, might seem excessively technical. This is inevitable: since game theory sits at the interface of mathematics and management, it must of necessity adopt a terminology that is familiar to both.
The fundamental constituents of any game are its participating, independent decision makers, called players. Players can be individual persons, organizations or, in some cases, nature itself. When nature is designated as one of the players, it is assumed to move without favor and according to the laws of chance; in the terms of game theory, nature is not 'counted' as one of the players. So, for instance, when a deck of cards is shuffled prior to a game of solitaire, nature (the second player) is making the first move in what is termed a 'one-player' game. This is fundamentally different from chess, for example, where nature takes no part, initially or subsequently.
A game must have two or more players, one of which may be nature. The total number of players may be large, but must be finite and must be known. Each player must have more than one choice, because a player with only one way of choosing can have no strategy and therefore cannot alter the outcome of the game.
An outcome is the result of a complete set of strategic choices by all the players in a game, and it is assumed that players have consistent preferences among the possibilities. Furthermore, it is assumed that individuals are capable of arranging these possible outcomes in some order of preference. If a player is indifferent between two or more outcomes, then those outcomes are assigned equal rank. Based on this order of preference, it is possible to assign numerical pay-offs to all possible outcomes. In some games an ordinal scale is adequate, but in others it is necessary to have interval scales, where preferences are set out in relative terms. For example, a pay-off of six should be three times more desirable than a pay-off of two.
A pure strategy for a player is a plan of action for the entire game, stipulating in advance what the player will do in response to every eventuality. If a player selects a strategy without knowing which strategies were selected by the other players, then the player's pure strategies are simply equivalent to his or her choices. If, however, a player's strategy is chosen after those of the other players and in full knowledge of what they were, then there will be more pure strategies than choices. For instance, in the case of the union dispute, management has two choices and two pure strategies: give in or refuse. The union's choice, however, is made after management's and in full knowledge of it, so its pure strategies are advance statements of what the union will choose in response to each of management's choices.
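The count of pure strategies for a second mover can be sketched directly: each of its strategies is a function from the first mover's choices to responses. The labels below ("give in"/"refuse" for management, "strike"/"settle" for the union) are hypothetical stand-ins for the dispute discussed above.

```python
from itertools import product

# Hypothetical union dispute: management moves first; the union
# responds after observing management's choice.
management_choices = ["give in", "refuse"]
union_choices = ["strike", "settle"]

# Management moves without seeing the union, so its pure strategies are its choices.
management_strategies = management_choices

# A union pure strategy states a response to EACH possible management choice.
union_strategies = [dict(zip(management_choices, resp))
                    for resp in product(union_choices, repeat=len(management_choices))]

print(len(management_strategies), len(union_strategies))  # 2 4
```

In general a second mover with c responses to each of m observed choices has c to the power m pure strategies, which is why strategies multiply faster than choices.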
One of the fundamental terms in Game Theory is "strategy." What is a strategy in ordinary language? McMillan (1992) teaches us that the source of the word "strategy" is the Greek word for the leader of an army, which is rather different from its usage today. Webster's dictionary defines the word as "a method for making or doing something or attaining an end," and the Oxford English Dictionary defines it as "a general plan of action." We talk about the strategy to win a war, the strategy of managers to make profits or avoid being fired, and the strategy to survive a hurricane.
The informal definition of strategy used by game theorists is not far from the one in daily language. Martin Shubik refers to a strategy as "a complete description of how a player intends to play a game, from beginning to end." Jim Friedman defines it as "a set of instructions," and John McMillan defines strategy as "a specification of actions covering all possible eventualities." Thus, it is appropriate to ask whether the formal definition of the term "strategy" in Game Theory is similar to its everyday meaning. If it is not, then Game Theory's value is put into serious doubt; and even if it is, there is still a long way to go before we can be convinced that game-theoretic results can teach managers something.
In an extensive game, a player's strategy is required to specify an action at each node in the game tree at which the player has to move. Therefore, a player has to specify an action for every sequence of events that is consistent with the rules of the game.
According to the natural definition of strategy as a "complete plan of action," player 1 is required to specify his behavior, "Continue" (C) or "Stop" (S), at the first node and, if he plans to "Continue," to make provisional plans for his second decision node in the event that player 2 chooses C. In contrast, the game-theoretic definition of strategy requires player 1 to specify his action at the second decision node even if he plans to "Stop" the game at the first node. Here, as in any game which requires a player to make at least two consecutive moves (and the majority of the games which have been analyzed in economic theory fall into this category), a strategy must specify the player's actions even after histories which are inconsistent with his own strategy.
Why does the notion of strategy used by game theorists differ from a "plan of action"? If we were investigating only the Nash equilibria of games, then the game-theoretic definition would certainly be unnecessarily broad. The broad definition is, however, necessary for testing the rationality of a player's plan, both at the beginning of the game and at the point where he must consider the possibility of responding to an opponent's prospective deviation (the idea behind subgame perfection). Returning to the game form above, suppose that each player plans to choose "Stop" at his first decision node. Testing the optimality of player 2's plan following a deviation by player 1 requires that player 2 form expectations concerning player 1's play at his second decision node. The specification of player 1's action after both players have chosen C provides these expectations, and has to be interpreted as player 2's (as opposed to player 1's) belief about player 1's planned future play should player 1 deviate from what was believed to be his original plan of action. Therefore, a strategy encompasses not only the player's plan but also his opponents' expectations in the event that he does not follow that plan. Hence, an equilibrium strategy describes a player's plan of action as well as those considerations which support the optimality of his plan (i.e., beliefs concerning the other players' plans), rather than being simply a description of a "plan of action." A profile of strategies provides a full analysis of the situation and not merely a tuple of plans of action.
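The gap between plans and strategies can be counted explicitly. In a hypothetical game where player 1 moves at a first node and, if play returns to him, at a second node, a plan specifies the second move only when the first move is C, while a strategy specifies both nodes unconditionally:

```python
from itertools import product

# Player 1 chooses C(ontinue) or S(top) at his first node and, if play
# reaches it, again at a second node. Hypothetical two-move sketch.
plans = ["S", ("C", "C"), ("C", "S")]       # a plan names the second move only
                                            # when the first move is C
strategies = list(product("CS", repeat=2))  # a strategy names an action at BOTH
                                            # nodes, even after choosing S first
print(len(plans), len(strategies))  # 3 4
```

The two extra objects, ('S', 'C') and ('S', 'S'), differ only at a node which player 1's own first move precludes; it is exactly this component that the text interprets as player 2's belief about player 1's play after a deviation.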
The naive understanding of "mixed strategy" requires that a player use a roulette wheel or some other random device to decide upon an action. Normally, we are reluctant to believe that our decisions are made at random; we prefer to believe that there is a reason for each action we take. Most game theorists never considered using random devices as part of a strategy before being exposed to Game Theory. Indeed, the concept of mixed strategy has been a target of criticism. To quote Aumann (1987): "Mixed strategy equilibria have always been intuitively problematic," and Radner and Rosenthal (1982): "One of the reasons why game-theoretic ideas have not found more widespread application is that randomization, which plays a major role in Game Theory, seems to have limited appeal in many practical situations."
There are evidently cases in which players do choose random actions. When a principal (for instance, an employer or the government) monitors his agents (employees or citizens), often only a few are monitored and they are chosen randomly. Of course, this is not a "mixed strategy." The stochastic rule used for auditing is in fact a pure strategy: the principal is not indifferent between monitoring no agents and monitoring all of them. His "pure strategy" is the specification of the randomization to be used. As a further example, a parent who must allocate one piece of candy to one of his two children may strictly prefer to flip a coin. However, this flip of a coin is not a "mixed strategy" either. Typically, the parent would not be indifferent between procedures for allocating the candy. An allocation procedure must be regarded as a pure strategy even if it entails flipping a coin.
The game-theoretic literature proposes a few more interpretations of this concept. On one interpretation, the game is played occasionally between players, each drawn from a population homogeneous in its interests but heterogeneous in its actions. The mixed strategy then describes the distribution of actions taken by the agents who are drawn from the population attached to any given role in the game. By a different interpretation, a mixed strategy is conceived of as a plan of action which is dependent on private information not specified in the model. According to this interpretation, a player's behavior is in fact deterministic, although it appears to be random. If we add this information structure to the model, the mixed strategy becomes a pure one in which actions depend on seemingly irrelevant information. This interpretation is problematic since it is not clear why people would base their behavior on factors which are obviously irrelevant to the situation.
There is a further interpretation which has gained popularity during the past decade. According to this interpretation, a mixed strategy is the belief held by all other players concerning a player's actions (Aumann, 1987). A mixed strategy equilibrium is then an n-tuple of common-knowledge expectations; it has the property that, given these beliefs, all actions with strictly positive probability are optimal. Thus, the uncertainties underlying the mixed strategy equilibrium are viewed as an expression of the lack of certainty on the part of the other players rather than a deliberate plan of the individual player.
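The beliefs reading fits the way mixed equilibria are actually computed: a player's mixture is pinned down not by his own payoffs but by the requirement that it leave the opponent indifferent. A minimal sketch for Matching Pennies (standard plus/minus-one payoffs):

```python
from fractions import Fraction

# Mixed equilibrium of Matching Pennies via the indifference condition:
# the column player's mixture is the BELIEF that makes the row player indifferent.
# Payoffs to the row player (zero-sum): match -> +1, mismatch -> -1.
a, b = 1, -1   # row's payoff when column plays Heads, if row plays H / T
c, d = -1, 1   # row's payoff when column plays Tails, if row plays H / T

# Column puts probability q on Heads so that row is indifferent:
#   q*a + (1-q)*c = q*b + (1-q)*d
q = Fraction(d - c, (a - c) - (b - d))
print(q)  # 1/2 -- the familiar fifty-fifty belief
```

Note that nothing in the calculation says the column player actually tosses a coin; q is an equilibrium expectation held about him, which is exactly the point of the beliefs interpretation.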
It would appear that two central versions of the game-theoretic notion of a "strategy" (an extensive form strategy, and a mixed strategy in strategic games) are more suitably interpreted as beliefs than as plans of action. However, if strategies are beliefs rather than plans of action, the whole manner of speaking in Game Theory has to be reassessed.
As an illustration of the difficulties which follow from the beliefs interpretation, consider the sequential games literature, in which authors sometimes assume that strategies are stationary in the sense that a player's behavior is independent of the history of the game. This literature presents stationarity as an assumption related to simplicity of behavior. For example, consider player 1's strategy "always play a" in a repeated game. This strategy is simple in the sense that the player plans to make the same move independently of the other players' actions.
Thus, stationarity in sequential games entails not only simplicity but also passivity of beliefs. This is counter-intuitive, particularly if we assume simplicity of behavior. If player 2 believes that player 1 is committed to a stationary plan of action, then player 2 probably must believe (after 17 repetitions of the action b) that player 1 will continue to play b. Thus, by assuming passivity of beliefs, we eliminate a great deal of what sequential games are intended to model, namely, the changing pattern of players' behavior and beliefs as players accumulate experience.
With regard to mixed strategies, the acceptance of the beliefs interpretation requires the reconsideration of much of applied Game Theory. In particular, it implies that equilibrium does not lead to a statistical prediction of players' behavior. Any action taken by player i as a best response given his expectation about the other players' strategies is consistent as a prediction for i's future action (this might even include actions which are outside the support of the mixed strategy). This renders meaningless any comparative statics or welfare analysis of the mixed strategy equilibrium and calls into question the vast economic literature which uses mixed strategy equilibria.
Further, game theory suggests that the core of strategic competition is the interaction among players, such that the decisions made by any one player are dependent on the actual and anticipated decisions of the other players. This idea permits the framing of strategic decisions and offers insight into competition and bargaining. Such a framework can predict the equilibrium outcomes of competitive situations and the consequences of strategic moves by any one player. The ongoing tradeoff between cooperation and competition within alliances has led to their conceptualization as repeated games. The most often used game scenario is called the "prisoner's dilemma," which involves two players who can either cooperate with each other or cheat. Depending on their choice of action, their payoffs will differ. If the two players cooperate with each other, they each receive a much greater payoff than if they both choose to cheat. If one of them cheats while the other cooperates, however, then the cheating player receives the highest possible payoff while the cooperative player loses. Therefore, two self-interest-maximizing players who are aware of their payoff structure will both cheat, although they could be better off cooperating. The reason is that no player would knowingly risk cooperating when the chance exists that the other player would cheat.
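The dilemma can be verified mechanically on a payoff table. The numbers below are hypothetical, chosen only to satisfy the standard ordering (temptation > reward > punishment > sucker's payoff):

```python
# A hypothetical Prisoner's Dilemma payoff table: (row player's, column player's).
# 'C' = cooperate, 'D' = cheat/defect.
payoffs = {
    ("C", "C"): (3, 3),   # mutual cooperation (R)
    ("C", "D"): (0, 5),   # cooperator loses (S), cheater gets the top payoff (T)
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),   # mutual cheating (P)
}

# Cheating strictly dominates cooperating for the row player:
dominates = all(payoffs[("D", col)][0] > payoffs[("C", col)][0] for col in "CD")
print(dominates)                                 # True: each self-interested player cheats...
print(payoffs[("D", "D")], payoffs[("C", "C")])  # ...yet (1, 1) is worse than (3, 3)
```

The symmetric check holds for the column player, so mutual cheating is the unique equilibrium even though both players prefer the mutual-cooperation outcome.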
Robert Axelrod found that in a prisoner's dilemma scenario, a player can perform exceptionally well if he or she adopts a "tit-for-tat" strategy; that is, if he or she starts the game in good faith by cooperating but responds immediately to any opportunism by the other player by cheating in the next period. In this way, players signal that they will penalize any partner who cheats, and they revert to cooperation once the partner stops cheating. Consequently, some management scholars see a parallel between the prisoner's dilemma game and interfirm alliances and propose that firms adopt a tit-for-tat strategy in their conduct and governance of alliances (R. Johnston and P. R. Lawrence, 1988). Doz, Hamel, and Prahalad, however, made a different recommendation by treating the benefits of cheating as increasing over time, especially for the firm that learns more rapidly from the alliance during the period of cooperation (Y. Doz, G. Hamel, and C. K. Prahalad, 1989). Consequently, they suggested that a participant should try to learn as fast as possible and then quit the alliance at an advantage. All these suggestions based on the prisoner's dilemma game imply short-lived joint venture relationships, whereas there are also joint ventures that last a long time (for example, the Royal Dutch/Shell group formed by Dutch and British partners).
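A minimal simulation makes the tit-for-tat mechanics visible: the strategy mirrors the opponent's previous move, so it punishes exactly during a cheating spell and reverts to cooperation afterwards. The opponent's behavior and the cheat rounds below are hypothetical, and the payoffs reuse the standard hypothetical table above.

```python
# Sketch of Axelrod-style tit-for-tat against an occasional cheater.
# Payoffs per round are (mine, theirs); opponent's cheat rounds are hypothetical.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    """Cooperate first; thereafter mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def sometimes_cheats(round_no):
    return "D" if round_no in (3, 4) else "C"   # cheats in rounds 3-4, then reforms

opponent_moves, my_total = [], 0
for r in range(10):
    mine, theirs = tit_for_tat(opponent_moves), sometimes_cheats(r)
    my_total += PAYOFF[(mine, theirs)][0]
    opponent_moves.append(theirs)

print(my_total)  # punishment lags the cheating by one round, then cooperation resumes
```

The one-round lag is the signaling device the text describes: the partner experiences retaliation immediately after cheating, and cooperation is restored as soon as the partner cooperates again.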
In contrast to the short-term view of the prisoner's dilemma, Gulati and Nohria argued that "the prisoner's dilemma framework is particularly unsuited for alliances that entail significant long term commitments on the part of the partners and where partnering firms might indeed gain their highest payoffs, both in the long and short run, by cooperating" (R. Gulati and N. Nohria, 1992, 133-139). They asserted that in "mutually assured alliances," regardless of any short-term benefit the parties might gain from cheating, they tend not to do so because cheating would damage their reputation for building new alliances, whether with the existing parties or with new parties in the future. The researchers claimed that "unlike the prisoner's dilemma game, where the dominant strategy for each player is to cheat, a mutual assurance game has two possible outcomes: either both the partners choose to cooperate or both choose to cheat" (R. Gulati and N. Nohria, 18). In the mutual assurance game, a player's optimal strategy depends upon what he or she expects of the other player. Hence, unlike the prisoner's dilemma game, in the mutual assurance scenario it might be more beneficial for a player to cooperate and to convince the other player that cooperation is in his or her best interest as well. To ensure cooperation in alliances that have a mutual assurance payoff structure, Gulati and Nohria recommended the following managerial actions: "Share complete information with your partners: do not conceal anything; continuously update the alliance: don't get stuck on an initial plan; be generous and take the long term view: don't balk at making unilateral commitments; walk through the conflicts: don't jump to conclusions; get incremental small gains: not just big payoffs; and choose your alliances cautiously: don't rush into partnerships" (R. Gulati and N. Nohria, 1992).
Obviously, Gulati and Nohria's mutual assurance game takes a different view from the prisoner's dilemma game by emphasizing cooperation and long-term benefits for the players. In addition, it encourages positive reinforcement rather than penalizing actions in influencing the choice of a party that might consider cheating. Moreover, a tit-for-tat approach breeds mistrust, while a mutual assurance approach is more likely to build trust and commitment.
One of the significant implications of game theory is its ability to view business interactions as incorporating both competition and cooperation. Brandenburger and Nalebuff recognized this duality of competition and cooperation (A. M. Brandenburger and B. J. Nalebuff, 1996). They identified four types of players: customers, suppliers, competitors, and complementors. They argued that a player can be a competitor as well as a complementor. They explicated this dynamic as follows: a player is your complementor if customers value your product more when they have the other player's product than when they have your product alone; a player is your competitor if customers value your product less when they have the other player's product than when they have your product alone. Additionally, customers and suppliers are also players, and the same player might have multiple roles. According to the authors, the basic model is the value net, which represents all players and the interdependencies among them. Along the vertical dimension of the value net are the company's customers and suppliers. Traditionally, businesses pay much attention to customers but not enough to suppliers. In recent years, however, firms paying attention to suppliers have made progress in building strategic advantages. Along the horizontal dimension of the value net are competitors and complementors. The former have been studied extensively, but the latter are a new concept for which the authors coined the term. Complementors are players who offer complements to the products and services that the firm produces. The best examples of complementors are computer hardware and software, cars and automobile loans, and television and videocassette recorders. We can see that a natural association exists between complementors.
In summary, game theory recognizes that competition and cooperation are two essential, paired components to which firms should pay attention. Players can benefit from collaboration yielding win-win outcomes instead of competition resulting in win-lose outcomes. In contrast to the traditional view of competition, game theory considers the most advantageous division of the pie between partners for mutual gain. We must also recognize that a player can take on multiple roles. Unlike traditional views, which offer little insight into firms' choices of whether to compete or to cooperate, game theory identifies the role of cooperation alongside competition.
The structure of a game-theoretic analysis is as follows: first, the modeler spells out the boundaries of the game (the identities of the players, the rules, the information available to the players, and the players' interests). The analysis then moves on to "finding solutions." A solution concept is a procedure which assigns to each game a set of strategy profiles satisfying certain conditions of stability and rationality; the result of applying a solution concept to a game is thus a set of profiles, each of which assigns a strategy to each player.
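A solution concept can be sketched quite literally as such a procedure: a function from games to sets of profiles. The sketch below implements pure-strategy Nash equilibrium for a hypothetical two-player coordination game; the payoffs and action labels are illustrative only.

```python
from itertools import product

# A solution concept as a procedure: given a two-player game in payoff-table
# form, return the pure-strategy Nash equilibria. Hypothetical coordination game.
payoffs = {   # (row, col) -> (row's payoff, col's payoff)
    ("A", "A"): (2, 2), ("A", "B"): (0, 0),
    ("B", "A"): (0, 0), ("B", "B"): (1, 1),
}
actions = ["A", "B"]

def nash_equilibria(payoffs, actions):
    """Profiles at which neither player gains by a unilateral deviation."""
    eq = []
    for r, c in product(actions, actions):
        row_ok = all(payoffs[(r, c)][0] >= payoffs[(d, c)][0] for d in actions)
        col_ok = all(payoffs[(r, c)][1] >= payoffs[(r, d)][1] for d in actions)
        if row_ok and col_ok:
            eq.append((r, c))
    return eq

print(nash_equilibria(payoffs, actions))  # [('A', 'A'), ('B', 'B')]
```

Note that the procedure returns a set of profiles, not a single prediction; here the solution concept by itself cannot say which of the two equilibria will be played, which is precisely the gap between a solution concept and a prediction discussed below.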
This type of analysis distinguishes between the assumptions embedded in the solution concept and those underlying the description of the game. The former are regarded as innocuous, though they are in fact very strong assumptions concerning the players' behavior and process of deliberation. Since they are buried in the solution concept, these assumptions are almost never discussed by users of Game Theory. Indeed, these assumptions are not all that clear to game theorists either, and it was a long time before game theorists began examining the epistemological assumptions underlying the various solution concepts.
Thus, whereas applied economists often debate the assumptions of a model, they pay almost no attention to the suitability of the assumptions which are incorporated into the solution concept. A solution concept is treated as a machine which receives a game as input and produces an output which is interpreted as a prediction of the way in which the game will be played.
Atkinson, R. C., Bower, G. H., & Crothers, E. J. An introduction to mathematical learning theory. New York: Wiley, 1965.
Aumann, R. (1987) "What is Game Theory Trying to Accomplish?," in K. J. Arrow and S. Honkapohja (eds.), Frontiers of Economics, Oxford: Blackwell.
Binmore, K. (1990) "Aims and Scope of Game Theory," in K. Binmore, Essays on the Foundations of Game Theory, Oxford: Blackwell.
Borel, É. “La théorie du jeu et les équations intégrales à noyau symétrique”. Comptes Rendus de l’Académie des Sciences, 1921, 173, 1304-1308.
Dixit, A. K. and B. J. Nalebuff (1991) Thinking Strategically, New York: Norton
Edgeworth F. Y. Mathematical psychics. London: Kegan Paul, 1881.
Harsanyi, J. (1973) "Games with Randomly Disturbed Payoffs: A New Rationale for Mixed-strategy Equilibrium Points," International Journal of Game Theory, 2, 1-23.
Henderson, W., T. Dudley-Evans and R. Backhouse (eds.) (1993) Economics and Language, London: Routledge
Lossky A. “International relations in Europe”. In J. S. Bromley (Ed.), The new Cambridge modern history: Volume VI. The rise of Great Britain and Russia. Cambridge: Cambridge University Press, 1970.
Lucas, W. F. "An overview of the mathematical theory of games". Management Science, 1972, 18, P3-P19.
Luce R. D., & Suppes P. Preference, utility, and subjective probability. In R. D. Luce, R. R. Bush , & E. Galanter (Eds.), Handbook of mathematical psychology, Vol. III. New York: Wiley, 1965.
Brandenburger, A. M., and B. J. Nalebuff, Co-opetition. New York: Currency Doubleday, 1996.
McCloskey, D. N. (1998) The Rhetoric of Economics (Rhetoric of the Human Sciences), 2nd edn., Madison: University of Wisconsin Press
McMillan, J. (1992) Games, Strategies, and Managers, Oxford: Oxford University Press
Morgenstern, O. Game theory: Theoretical aspects. In D. L. Sills (Ed.), International encyclopedia of the social sciences, Vol. 6. New York: Macmillan, 1968.
Nash, J. (1950) “The Bargaining Problem, ” Econometrica, 18, 155—62.
Gulati, R., and N. Nohria, "Mutually assured alliances" (working paper, Academy of Management Best Papers Proceedings, Las Vegas, NV: August 9-12, 1992): 18.
Johnston, R., and P. R. Lawrence, "Beyond vertical integration: the rise of the value-adding partnership," Harvard Business Review 66 (1988): 94-101.
Radner, R. and R. Rosenthal (1982) "Private Information and Pure Strategy Equilibria," Mathematics of Operations Research, 7, 401-409.
Rapoport, A., & Chammah, A. M. Prisoner's dilemma. Ann Arbor: University of Michigan Press, 1965.
Rubinstein, A., (1991) “Comments on the Interpretation of Game Theory, ” Econometrica, 59, 909—24.
Rubinstein, A., Z. Safra and W. Thomson (1992) “On the Interpretation of the Nash Bargaining Solution, ” Econometrica, 60, 1171—86.
Shubik M. “Game theory and the study of social behavior: An introductory exposition”. In M. Shubik (Ed.), Game theory and related approaches to social behavior. New York: Wiley, 1964.
von Neumann, J. "Zur Theorie der Gesellschaftsspiele". Mathematische Annalen, 1928, 100, 295-320.
Doz, Y., G. Hamel, and C. K. Prahalad, "Collaborate with your competitors and win," Harvard Business Review 67 (1989): 133-139.
Zermelo, E. "Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels". In E. W. Hobson & A. E. H. Love (Eds.), Proceedings of the fifth international congress of mathematicians, Vol. II. Cambridge: Cambridge University Press, 1913.