From: dc@cage.rug.ac.be (Denis Constales)
Newsgroups: sci.math
Subject: Re: Help Needed on Markov-Chain Problems!!!
Date: Thu, 21 Aug 1997 10:36:42 +0200

In article <5tbhmb$57s@nuscc.nus.sg>, sci60055@leonis.nus.sg
(Cheong Lip Ghee) wrote:

> 1) Which takes fewer flips on average: flipping until HHT appears or
> until HTH appears? Why is there a difference and what are the
> implications of this difference? Considering the general case, i.e. n
> tosses, I speculate that E(no. of tosses until n heads appear) =
> E(no. of tosses until n tails appear) = max(E(all series of n
> tosses)). Also, min(E(no. of tosses to get a series of n tosses))
> will occur when no. of H = [n/2]. Is this correct and if so, how do
> we prove it?

I expected that some specialist would answer these, but since no
replies seem to be forthcoming, here's a try at it. Presumably these
things have special names etc. in this context, but I could not be
a---d to look them up in books. :^)

HHT:
- if H is seen and H fails to come, there's HT, no part of which can
  belong to a future HHT;
- if HH is seen and T fails to come, there's HHH, but the last two H's
  could be the first two of a successful HHT.

HTH:
- if H is seen and T fails to come, there's HH, but the second H could
  be the first of a successful HTH;
- if HT is seen and H fails to come, there's HTT, which is useless (no
  part of it can belong to a future HTH).

Therefore the situations are different (cf. below for the values).

> 2) How do you find P(HHT occurs at least once in 10 tosses)? Is there
> a formula for n tosses?

You might devise a finite-state automaton that recognises HHT.
Example:

  state 1 (nothing useful seen)  H goto 2, T goto 1
  state 2 (one H seen)           H goto 3, T goto 1
  state 3 (HH seen)              H goto 3 (<- stay here, not 1), T goto 4
  state 4 (HHT seen)             H or T goto 4.

The occurrences of H and T both have probability 1/2. Put the
probabilities of being in state 1, 2, etc. in a column vector. Since
you start in state 1, the initial value is (1,0,0,0)^T (^T is matrix
transpose, it's a column vector). Then the action of flipping the coin
amounts to multiplying on the left by the matrix

  [1/2 1/2  0   0]
  [1/2  0   0   0]
  [ 0  1/2 1/2  0]
  [ 0   0  1/2  1]

(column j holds the transition probabilities out of state j).

P(HHT occurs at least once in 10 tosses) can then be obtained by
taking the 10th power of this matrix and multiplying it with
(1,0,0,0)^T:

  [ 89/1024]
  [ 55/1024]
  [ 88/1024]
  [792/1024]

so the probability of HHT occurring, i.e. of state 4 having been
reached, is 792/1024 = 99/128. General formula for n tosses: take the
n-th power, i.e. replace 10 by n. Matrix powers are easier in Jordan
normal form. (Take care to handle the 1's next to the diagonal
properly.)

For HTH the finite-state automaton is:

  state 1 (nothing useful seen)  H goto 2, T goto 1
  state 2 (one H seen)           H goto 2 (<- important, not 1), T goto 3
  state 3 (HT seen)              H goto 4, T goto 1 (start all over)
  state 4 (HTH seen)             H or T goto 4.

The matrix is

  [1/2  0  1/2  0]
  [1/2 1/2  0   0]
  [ 0  1/2  0   0]
  [ 0   0  1/2  1]

and its tenth power multiplied by (1,0,0,0)^T is

  [ 57/512 ]
  [151/1024]
  [ 43/512 ]
  [673/1024]

so the probability is 673/1024, which is smaller: HHT is more likely
than HTH to have shown up within 10 tosses.
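If you want to check these numbers by machine rather than by hand,
here is a small Python sketch of the same matrix-power computation
(the helper names are just mine for illustration; the fractions
module keeps everything exact):

    from fractions import Fraction as F

    half = F(1, 2)

    # Column-stochastic matrices: column j holds the transition
    # probabilities out of state j (states 1..4 as above, indices 0..3 here).
    HHT = [[half, half, 0,    0],
           [half, 0,    0,    0],
           [0,    half, half, 0],
           [0,    0,    half, 1]]

    HTH = [[half, 0,    half, 0],
           [half, half, 0,    0],
           [0,    half, 0,    0],
           [0,    0,    half, 1]]

    def flip(M, p):
        # one coin toss = multiply the state vector on the left by M
        return [sum(M[i][j] * p[j] for j in range(4)) for i in range(4)]

    def p_seen(M, n):
        # probability that the pattern has appeared within n tosses
        p = [F(1), F(0), F(0), F(0)]   # start in state 1
        for _ in range(n):
            p = flip(M, p)
        return p[3]

    print(p_seen(HHT, 10))   # 99/128  (= 792/1024)
    print(p_seen(HTH, 10))   # 673/1024

Replace 10 by any n for the general case.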
To get the average number of flips: if p0 = 0, p1, p2, ... are the
successive values of the state-4 probability, the average number of
flips is

  (p1 - p0) + 2(p2 - p1) + 3(p3 - p2) + ...

Now the p_i are obtained by powers of a matrix. If we put this in
Jordan normal form, the problem amounts to summing things like

  (v - 1) + 2(v^2 - v) + 3(v^3 - v^2) + ...

for the eigenvalues v. (When it's a diagonal Jordan normal form;
otherwise it's slightly more complicated.) This series is 0 if v = 1,
and otherwise, if |v| < 1, it equals -1 - v - v^2 - ... = 1/(v-1).

In practice, you could put A in Jordan normal form, subtract the unit
matrix, put ones where there are zeroes in the main diagonal, invert,
replace those ones by zeroes again on the diagonal, and then multiply
by the Jordan similarity matrices etc. (This won't work if there are
1's next to an eigenvalue-1 block in A, but that won't occur for this
type of problem: there's only a 1-dimensional eigenspace for v = 1,
corresponding to the final state.)

There's a shortcut to these computations, though: subtract the unit
matrix, add ones to the lowest row, invert, and subtract ones from the
lowest row. Then the expected number of flips is in the lower left
corner. (A small numerical check of this shortcut is appended at the
very end, after the sig.) For HHT the result is

  [-4 -2  0  0]
  [-2 -2  0  0]
  [-2 -2 -2  0]
  [ 8  6  2  0]

so the average number of flips is 8; for HTH the result is

  [-4 -2 -2  0]
  [-4 -4 -2  0]
  [-2 -2 -2  0]
  [10  8  6  0]

and the average number of flips is 10. So HHT takes fewer flips on
average (8 against 10), which answers question 1. This fits the
near-miss analysis at the top: once HH is on the board, a wrong toss
still leaves HH, whereas HT followed by the wrong toss throws you back
to the start.

> 3) How do you find E(no. of HHT appearing in 10 tosses)? Can this
> value be a non-integer and if so, what does this mean?

In state 3, let the automaton go to state 4 *and* to state 1 when a T
is seen; then the expectation will accumulate nicely in the fourth
component. (This method assumes that the tail of a match is not
allowed to be part of a subsequent match; for HTH, e.g., that HTHTH
counts as only one HTH, not two; if you want two to be counted, state
3 would co-jump to state 2 instead of state 1.)

The new matrix for HHT is

  [1/2 1/2 1/2  0]
  [1/2  0   0   0]
  [ 0  1/2 1/2  0]
  [ 0   0  1/2  1]

and for 10 tosses the product with (1,0,0,0)^T is

  [1/2]
  [1/4]
  [1/4]
  [ 1 ]

so the E() is exactly 1. In general this expectation is a non-integer,
because it is an average of the integer counts 0, 1, 2, ...; it just
happens to come out whole here. (By linearity it equals (n-2)/8 for
n >= 2 tosses: each of the n-2 windows of length 3 is HHT with
probability 1/8, and HHT cannot overlap itself.)

The Jordan normal form of the matrix here is

  [0 1 0 0]
  [0 0 0 0]
  [0 0 1 1]
  [0 0 0 1]

so the (3,4) component is n at the nth power, and in the long run this
will (work out the similarity) influence the average with a factor
1/8; i.e., in the long run the expectation of the number of HHT's
present is 1/8 of the number of tosses.

How to determine these automata? There's an... automatic method
involving non-deterministic automata and their subsequent conversion
to deterministic ones; cf. books on regular-expression matching and
compiler design.

-- 
Dr. Denis Constales - dcons@world.std.com - http://cage.rug.ac.be/~dc/
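P.S. Here is one more little Python sketch (numpy this time; again the
names are just mine) that checks the expected-flips shortcut and the
counting matrix of question 3 numerically:

    import numpy as np

    # The two transition matrices from above, columns = source states.
    HHT = np.array([[.5, .5, 0., 0.],
                    [.5, 0., 0., 0.],
                    [0., .5, .5, 0.],
                    [0., 0., .5, 1.]])
    HTH = np.array([[.5, 0., .5, 0.],
                    [.5, .5, 0., 0.],
                    [0., .5, 0., 0.],
                    [0., 0., .5, 1.]])

    def expected_flips(A):
        # the shortcut: subtract I, add ones to the last row, invert,
        # subtract ones from the last row, read off the lower left corner
        B = A - np.eye(A.shape[0])
        B[-1, :] += 1
        C = np.linalg.inv(B)
        C[-1, :] -= 1
        return C[-1, 0]

    print(expected_flips(HHT))   # 8.0  (up to rounding)
    print(expected_flips(HTH))   # 10.0 (up to rounding)

    # Question 3: the counting matrix, where state 3 feeds both
    # state 4 and state 1 when a T is seen.
    COUNT = np.array([[.5, .5, .5, 0.],
                      [.5, 0., 0., 0.],
                      [0., .5, .5, 0.],
                      [0., 0., .5, 1.]])
    start = np.array([1., 0., 0., 0.])
    print(np.linalg.matrix_power(COUNT, 10) @ start)
    # [0.5  0.25 0.25 1.  ]  -> E(number of HHT's in 10 tosses) = 1

The exact fractions could be recovered with the fractions module and a
hand-rolled elimination, but floating point is plenty for a check.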