Fast Computers and Games
While waiting for your latest computer program to complete some task, you may have wondered whether it really has to take so long - perhaps a better programmer could write a faster program. On this page we are going to see that some tasks require a very long time no matter who writes the program. One example is finding which player has a winning strategy from a given position in a two-player game. We shall now invent a (rather contrived) game to investigate this.
As in other problems in complexity theory, there is a number N representing the size of the game. The game is played on a board of squares divided into several groups; on each turn a player alters certain squares by placing or removing pebbles. One group of N squares are called the P-squares. These are set at the beginning of the game and never changed. Another group of N are called the A-squares. These are initially empty and are set by player 2 on his turn. A group of 2N called the S-squares, together with a group of 16 called the W-squares, are set up in a certain way at the start of the game and are changed by player 1 on his turn. Finally there is a group of N called the C-squares. Each of the C-squares contains a pebble at the start of the game, and these are used to count down in binary notation after each pair of turns, so that the game lasts for 2^N - 1 steps. Player 1 has to try to empty all of his squares (the S and W groups) by the end of the game, following a set of rules given below. Player 2 has to try to stop him from doing this.
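The counting role of the C-squares can be sketched in a few lines of code (the function name is our own invention, not from the game's rules). All N squares start with a pebble, so together they read as the binary number 2^N - 1; removing one each round means the game lasts exactly 2^N - 1 rounds:

```python
# Toy model of the C-squares as a binary counter.
def game_length(n):
    pebbles = [1] * n                            # every C-square holds a pebble
    value = int("".join(map(str, pebbles)), 2)   # read the squares as binary
    rounds = 0
    while value > 0:                             # count down to the empty board
        value -= 1
        rounds += 1
    return rounds

print(game_length(4))  # 15, i.e. 2**4 - 1
```

This is why a board of only a few hundred squares can force a game that is astronomically long.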
So what are the rules constraining player 1? Well, we imagine a computer processor which has one area for the program and input data and another for working memory and output data (initially empty). We will assume the processor has fairly simple instruction codes dealing with the working memory one word (16 bits) at a time. The processor has some internal registers with a fixed amount of storage, together with a program counter and an N-bit index register for addressing the memory - so it can access up to 2^N words of memory. In total, say the processor has 2N bits of internal storage. Then the P-squares represent the program and input data, the S-squares represent the internal state of the processor, and the W-squares represent the word of memory at the address given by the A-squares. (Note that the whole of the memory is not represented in the game, just one word of it at a time, selected by player 2.) The C-squares represent the step reached in the program, so the progress of the game represents stepping backwards through the program. Player 1 is required to set the S and W squares to be consistent with the operation of the processor - that is, given a state for step T+1, player 1 must invent a state for step T which could lead to the given state on the next step.
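Player 1's obligation can be made concrete with a tiny made-up machine (the transition rule below is entirely hypothetical; in the real game it is fixed by the processor's instruction codes). Given the board at step T+1, player 1 must exhibit a board for step T that actually steps to it:

```python
# Hypothetical 4-bit "processor": any fixed deterministic rule will do.
def step(state):
    return (state * 3 + 1) % 16

# Player 1's move at step T is only legal if it steps to the known state at T+1.
def is_consistent(claimed_prev, current):
    return step(claimed_prev) == current

def find_predecessor(current):
    # Player 1 searching the (tiny, toy) state space for a legal move.
    for s in range(16):
        if is_consistent(s, current):
            return s
    return None          # stuck: no consistent predecessor exists
```

In the real game the state space is far too large to search like this, which is exactly why analysing the game is hard.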
Now this is a general-purpose processor, so suppose we are given a program prog to decide something - that is, one which takes some data and returns a result true or false. Suppose that the result of the program (1 if true, 0 if false) is placed into the first word of working memory, and that when the processor has completed processing it enters a 'finished' state. This can be represented as a game by setting the P-squares to represent this program and its data, the S-squares to represent the finished state, and the W-squares to contain the value representing true. Call this GAME(prog,data). If we are given a game constructed in this way then we can work out the steps that the processor will go through and see whether it returns the result true within 2^N - 1 steps. If it does, then player 1 can win by just following the progress of the program backwards, setting the W-squares appropriately for the word of memory pointed to by the A-squares. Now consider the case that the program either gives the result false or else doesn't finish within 2^N - 1 steps. Then the initial state of the game is not consistent with the running of the processor. You should be able to convince yourself that if this is the case at step T of the program, and player 2 chooses the A-squares wisely, then player 1 will not be able to make the state of the board consistent with the program when he goes on to step T-1. Thus player 2 has a winning position.
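The winning strategy for player 1 when the program really does accept can be sketched as follows (again with a made-up stand-in for the processor): run the machine forward once, record every state, then during the game simply replay the recorded trace backwards. Consistency between steps T and T+1 is automatic, because the trace came from the machine itself:

```python
# Hypothetical machine; the real processor's step function is fixed elsewhere.
def step(state):
    return (state + 7) % 32

def forward_trace(start, rounds):
    trace = [start]
    for _ in range(rounds):
        trace.append(step(trace[-1]))
    return trace

trace = forward_trace(0, 10)
# Player 1's answer at step T is simply trace[T], and it always checks out:
assert all(step(trace[t]) == trace[t + 1] for t in range(10))
```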
You have seen that we can work out who has a winning position, but it will take 2^N steps - running the program and seeing what happens. Is there any way to do better? Suppose someone claims to have a better way of finding out whether player 1 has a winning strategy. Call this algorithm ONEWINS. Then use it to write a procedure G(PROG):

    IF ONEWINS(GAME(PROG,PROG)) then loop forever else return true

Now consider G(G) - does it enter the infinite loop? If it does, then it must be because ONEWINS(GAME(G,G)) is true, meaning that player 1 has a winning position in GAME(G,G), which means that G(G) returns TRUE within 2^N steps - contradicting our assumption. Hence G(G) terminates successfully, so player 1 does not have a winning position, and so G(G) does not terminate within 2^N steps. Thus we see that G(G) terminates but takes longer than 2^N steps. Since the bulk of the computation of G(G) is the call ONEWINS(GAME(G,G)), we have a game of size N which takes about 2^N steps for ONEWINS to analyse. Thus for any algorithm which analyses who has the winning position in a game, we can invent a game which takes it exponential time to analyse. This game may not seem much like those which you are used to, but it can be related to the following game: A and B each have a Boolean formula involving variables
x1, x2, ..., xN. Each player then takes turns in setting the value of one variable, with the aim of making his own formula true and his opponent's formula false. This can be related to the game above, so that calculating which player has a winning position takes time exponential in N. Still doesn't sound like any game you've played recently? Well, by representing the Boolean formulas as sub-positions in games such as Chess, Go or Draughts (Checkers) on an N×N board, it is possible to show that these games also require exponential time to find which player has the winning position.
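The exponential blow-up shows up directly in a brute-force solver for a simplified, single-formula variant of this game (our own illustration, not the article's construction: one shared formula, player 1 setting the odd-numbered variables and wanting it true, player 2 wanting it false). The game tree has 2^N leaves, so deciding the winner this way takes time exponential in N:

```python
def player1_wins(formula, n, assignment=()):
    """Does player 1 (moving first) have a winning strategy?

    `formula` maps a tuple of n booleans to a boolean; players alternate
    appending a value, player 1 trying to make the final formula true.
    """
    if len(assignment) == n:
        return formula(assignment)
    player1_to_move = len(assignment) % 2 == 0
    outcomes = (player1_wins(formula, n, assignment + (v,)) for v in (False, True))
    return any(outcomes) if player1_to_move else all(outcomes)

# Player 1 controls x1: setting it true wins immediately.
print(player1_wins(lambda x: x[0], 2))           # True
# Player 2 controls x2 and will simply set it false.
print(player1_wins(lambda x: x[1], 2))           # False
# Player 1 can echo x1 with x3, so "x1 == x3" is a win for player 1.
print(player1_wins(lambda x: x[0] == x[2], 3))   # True
```

The full two-formula game can be handled the same way by changing the test at the leaves; either way the solver may visit all 2^n assignments.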
References:
A. K. Chandra, D. C. Kozen & L. J. Stockmeyer, Alternation, J. Assoc. Comput. Mach. 28 (1981), 114-133.
L. J. Stockmeyer & A. K. Chandra, Provably difficult combinatorial games, SIAM J. Computing 8 (1979), 151-174.
A similar demonstration to this one can be found at: Exponentially Hard Games.
For more about NP-complete problems, see the page Thoughts on P vs NP.
At this point you may think that it would be easier to stick to solitaire or minesweeper, or even do a jigsaw. However, even these problems have been shown to be NP-complete. It is thought that any algorithm to solve an NP-complete problem must take more than polynomial time - perhaps even exponential time, as we have shown for the game defined above. However, nobody knows for sure; in fact there is a prize of one million dollars for anybody who can prove whether or not there is an algorithm to solve NP-complete problems in polynomial time.