Chapter 2: Algorithm Analysis
2.1 The functions, ordered by growth rate, are: 2/N, 37, √N, N, N log log N, N log N, N log(N²), N log² N, N^1.5, N², N² log N, N³, 2^(N/2), 2^N. Note that N log N and N log(N²) grow at the same rate.
2.2 (a) True.
(b) False. A counterexample is T1(N) = 2N, T2(N) = N, and f(N) = N.
(c) False. A counterexample is T1(N) = N², T2(N) = N, and f(N) = N².
(d) False. The same counterexample as in part (c) applies.
2.3 We claim that N log N is the slower growing function. To see this, suppose otherwise. Then N^(ε/√log N) would grow slower than log N. Taking logs of both sides, we find that, under this assumption, (ε/√log N) log N grows slower than log log N. But the first expression simplifies to ε√log N. If L = log N, then we are claiming that ε√L grows slower than log L, or, equivalently, that ε²L grows slower than log² L. But we know that log² L = o(L), so the original assumption is false, proving the claim.
2.4 Clearly, log^k1 N = o(log^k2 N) if k1 < k2, so we need to worry only about positive integers. The claim is clearly true for k = 0 and k = 1. Suppose it is true for k < i. Then, by L'Hôpital's rule,

    lim_{N→∞} (log^i N) / N = lim_{N→∞} i (log^{i−1} N) / N

The second limit is zero by the inductive hypothesis, proving the claim.
2.5 Let f(N) = 1 when N is even, and N when N is odd. Likewise, let g(N) = 1 when N is odd, and N when N is even. Then the ratio f(N)/g(N) oscillates between 0 and ∞.
2.6 For all these programs, the following analysis will agree with a simulation:
(I) The running time is O(N).
(II) The running time is O(N²).
(III) The running time is O(N³).
(IV) The running time is O(N²).
(V) j can be as large as i², which could be as large as N². k can be as large as j, which is N². The running time is thus proportional to N · N² · N², which is O(N⁵).
(VI) The if statement is executed at most N³ times, by the previous arguments, but it is true only O(N²) times (because it is true exactly i times for each i). Thus the innermost loop is entered only O(N²) times. Each time through, it takes O(j) = O(N²) time, for a total of O(N⁴). This is an example where multiplying loop sizes can occasionally give an overestimate.
2.7 (a) It should be clear that all algorithms generate only legal permutations. The first two algorithms have tests to guarantee no duplicates; the third algorithm works by shuffling an array that initially has no duplicates, so none can occur. It is also clear that the first two algorithms are completely random, and that each permutation is equally likely. The third algorithm, due to R. Floyd, is not as obvious; the correctness can be proved by induction. See J. Bentley, "Programming Pearls," Communications of the ACM 30 (1987), 754-757. Note that if the second line of algorithm 3 is replaced with the statement

    Swap(A[i], A[RandInt(0, N-1)]);

then not all permutations are equally likely. To see this, notice that for N = 3, there are 27 equally likely ways of performing the three swaps, depending on the three random integers. Since there are only 6 permutations, and 6 does not evenly divide 27, each permutation cannot possibly be equally represented.
(b) For the first algorithm, the time to decide if a random number to be placed in A[i] has not been used earlier is O(i). The expected number of random numbers that need to be tried is N/(N−i). This is obtained as follows: i of the N numbers would be duplicates, so the probability of success is (N−i)/N, and the expected number of independent trials is N/(N−i). The time bound is thus

    Σ_{i=0}^{N−1} iN/(N−i) < Σ_{i=0}^{N−1} N²/(N−i) < N² Σ_{i=0}^{N−1} 1/(N−i) = N² Σ_{j=1}^{N} 1/j = O(N² log N)

The second algorithm saves a factor of i for each random number, and thus reduces the time bound to O(N log N) on average. The third algorithm is clearly linear.
(c, d) The running times should agree with the preceding analysis if the machine has enough memory. If not, the third algorithm will not seem linear because of a drastic increase in running time for large N.
(e) The worst-case running time of algorithms I and II cannot be bounded, because there is always a finite probability that the program will not terminate by some given time T. Each algorithm does, however, terminate with probability 1. The worst-case running time of the third algorithm is linear; its running time does not depend on the sequence of random numbers.
2.8 Algorithm 1 would take about 5 days for N = 10,000, 14.2 years for N = 100,000, and 140 centuries for N = 1,000,000. Algorithm 2 would take about 3 hours for N = 100,000 and about 2 weeks for N = 1,000,000. Algorithm 3 would use 1⁄12 minutes for N = 1,000,000. These calculations assume a machine with enough memory to hold the array. Algorithm 4 solves a problem of size 1,000,000 in 3 seconds.
2.9 (a) O(N²).
(b) O(N log N).
2.10 (c) The algorithm is linear.
2.11 Use a variation of binary search to get an O(log N) solution (assuming the array is preread).
2.13 (a) Test to see if N is an odd number (or 2) and is not divisible by 3, 5, 7, ..., √N.
(b) O(√N), assuming that all divisions count for one unit of time.
(c) B = O(log N).
(d) O(2^(B/2)).
(e) If a 20-bit number can be tested in time T, then a 40-bit number would require about T² time.
(f) B is the better measure because it more accurately represents the size of the input.
2.14 The running time is proportional to N times the sum of the reciprocals of the primes less than N. This is O(N log log N). See Knuth, Volume 2, page 394.
2.15 Compute X², X⁴, X⁸, X¹⁰, X²⁰, X⁴⁰, X⁶⁰, and X⁶².
2.16 Maintain an array PowersOfX that can be filled in a for loop. The array will contain X, X², X⁴, up to X^(2^⌊log N⌋). The binary representation of N (which can be obtained by testing even or odd and then dividing by 2, until all bits are examined) can be used to multiply the appropriate entries of the array.
2.17 For N = 0 or N = 1, the number of multiplies is zero. If b(N) is the number of ones in the binary representation of N, then if N > 1, the number of multiplies used is

    ⌊log N⌋ + b(N) − 1
2.18 (a) A.
(b) B.
(c) The information given is not sufficient to determine an answer. We have only worst-case bounds.
(d) Yes.
2.19 (a) Recursion is unnecessary if there are two or fewer elements.
(b) One way to do this is to note that if the first N−1 elements have a majority, then the last element cannot change this. Otherwise, the last element could be a majority. Thus if N is odd, ignore the last element. Run the algorithm as before. If no majority element emerges, then return the Nth element as a candidate.
(c) The running time is O(N), and satisfies T(N) = T(N/2) + O(N).
(d) One copy of the original needs to be saved. After this, the B array, and indeed the recursion, can be avoided by placing each B_i in the A array. The difference is that the original recursive strategy implies that O(log N) arrays are used; this guarantees only two copies.
2.20 Otherwise, we could perform operations in parallel by cleverly encoding several integers into one. For instance, if A = 001, B = 101, C = 111, D = 100, we could add A and B at the same time as C and D by adding 00A00C + 00B00D. We could extend this to add N pairs of numbers at once in unit cost.
2.22 No. If Low = 1 and High = 2, then Mid = 1, and the recursive call does not make progress.
2.24 No. As in Exercise 2.22, no progress is made.
