Mathematics - Chapter 3: Algorithms

Definition: The general searching problem is to locate an element x in a list of distinct elements a1, a2, ..., an, or determine that it is not in the list. The solution to a searching problem is the location of the term in the list that equals x (that is, i is the solution if x = ai), or 0 if x is not in the list. For example, a library might want to check whether a patron is on a list of those with overdue books before allowing them to check out another book. We will study two different searching algorithms: linear search and binary search.

Algorithms
Chapter 3
With Question/Answer Animations
Copyright © McGraw-Hill Education. All rights reserved. No reproduction or distribution without the prior written consent of McGraw-Hill Education.

Chapter Summary
Algorithms
  Example Algorithms
  Algorithmic Paradigms
Growth of Functions
  Big-O and other Notation
Complexity of Algorithms

Section 3.1: Algorithms

Section Summary
Properties of Algorithms
Algorithms for Searching and Sorting
Greedy Algorithms
Halting Problem

Problems and Algorithms
In many domains there are key general problems that ask for output with specific properties when given valid input. The first step is to state the problem precisely, using appropriate structures to specify the input and the desired output. We then solve the general problem by specifying the steps of a procedure that takes a valid input and produces the desired output. This procedure is called an algorithm.

Algorithms
Definition: An algorithm is a finite set of precise instructions for performing a computation or for solving a problem.
Example: Describe an algorithm for finding the maximum value in a finite sequence of integers.
Solution: Perform the following steps:
1. Set the temporary maximum equal to the first integer in the sequence.
2. Compare the next integer in the sequence to the temporary maximum. If it is larger than the temporary maximum, set the temporary maximum equal to this integer.
3. Repeat the previous step if there are more integers. If not, stop.
4. When the algorithm terminates, the temporary maximum is the largest integer in the sequence.
(Portrait: Abu Ja'far Mohammed Ibin Musa Al-Khowarizmi, 780-850)

Specifying Algorithms
Algorithms can be specified in different ways. Their steps can be described in English or in pseudocode. Pseudocode is an intermediate step between an English-language description of the steps and a coding of these steps in a programming language. The form of pseudocode we use is specified in Appendix 3; it uses some of the structures found in popular languages such as C++ and Java. Programmers can use the description of an algorithm in pseudocode to construct a program in a particular language. Pseudocode helps us analyze the time required to solve a problem using an algorithm, independent of the actual programming language used to implement the algorithm.

Properties of Algorithms
Input: An algorithm has input values from a specified set.
Output: From the input values, the algorithm produces the output values from a specified set. The output values are the solution.
Correctness: An algorithm should produce the correct output values for each set of input values.
Finiteness: An algorithm should produce the output after a finite number of steps for any input.
Effectiveness: It must be possible to perform each step of the algorithm correctly and in a finite amount of time.
Generality: The algorithm should work for all problems of the desired form.

Finding the Maximum Element in a Finite Sequence
The algorithm in pseudocode (does this algorithm have all the properties listed above?):

procedure max(a1, a2, ..., an: integers)
  max := a1
  for i := 2 to n
    if max < ai then max := ai
return max {max is the largest element}

Linear Search
The linear search algorithm locates an item in a list by examining elements in the sequence one at a time, starting at the beginning, and returning the first position whose element equals x (or 0 if no match is found).

procedure linear search(x: integer, a1, a2, ..., an: distinct integers)
  i := 1
  while (i ≤ n and x ≠ ai)
    i := i + 1
  if i ≤ n then location := i
  else location := 0
return location {location is the subscript of the term that equals x, or is 0 if x is not found}

Binary Search
Assume the input is a list of items in increasing order. The algorithm begins by comparing the element to be found with the middle element. If the middle element is lower, the search proceeds with the upper half of the list; otherwise, it proceeds with the lower half (through the middle position). Repeat this process until the list has size 1. If the element we are looking for equals the remaining element, its position is returned; otherwise, 0 is returned to indicate that the element was not found.

procedure binary search(x: integer, a1, a2, ..., an: increasing integers)
  i := 1 {i is the left endpoint of the interval}
  j := n {j is the right endpoint of the interval}
  while i < j
    m := ⌊(i + j)/2⌋
    if x > am then i := m + 1
    else j := m
  if x = ai then location := i
  else location := 0
return location {location is the subscript i of the term ai equal to x, or 0 if x is not found}
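For readers who want to experiment, here is a minimal Python sketch of the two searching procedures (the function names and the 0-based indexing are our own choices; the pseudocode above uses 1-based positions):

```python
def linear_search(x, a):
    """Return the 1-based position of x in the list a, or 0 if x is absent."""
    for i, value in enumerate(a):
        if value == x:
            return i + 1          # convert 0-based index to 1-based position
    return 0


def binary_search(x, a):
    """Return the 1-based position of x in the sorted list a, or 0 if absent."""
    i, j = 0, len(a) - 1          # left and right endpoints of the interval
    while i < j:
        m = (i + j) // 2          # midpoint, rounding down
        if x > a[m]:
            i = m + 1             # restrict the search to the upper half
        else:
            j = m                 # restrict the search to the lower half
    return i + 1 if a and a[i] == x else 0


# The 16-element list from the worked example below; both searches find 19 at position 14.
data = [1, 2, 3, 5, 6, 7, 8, 10, 12, 13, 15, 16, 18, 19, 20, 22]
print(linear_search(19, data), binary_search(19, data))  # 14 14
```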
Example: The steps taken by a binary search for 19 in the list
  1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22
The list has 16 elements, so the midpoint is 8. The value in the 8th position is 10. Since 19 > 10, further search is restricted to positions 9 through 16:
  1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22
The midpoint of the list (positions 9 through 16) is now the 12th position, with a value of 16. Since 19 > 16, further search is restricted to the 13th position and above:
  1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22
The midpoint of the current list is now the 14th position, with a value of 19. Since 19 ≯ 19, further search is restricted to the portion from the 13th through the 14th positions:
  1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22
The midpoint of the current list is now the 13th position, with a value of 18. Since 19 > 18, search is restricted to the portion from the 14th position through the 14th:
  1 2 3 5 6 7 8 10 12 13 15 16 18 19 20 22
Now the list has a single element and the loop ends. Since 19 = 19, the location 14 is returned.

Sorting
To sort the elements of a list is to put them in increasing order (numerical order, alphabetical order, and so on). Sorting is an important problem because:
- A nontrivial percentage of all computing resources is devoted to sorting different kinds of lists, especially in applications involving large databases of information that must be presented in a particular order (e.g., by customer, part number, etc.).
- An amazing number of fundamentally different algorithms have been invented for sorting; their relative advantages and disadvantages have been studied extensively.
- Sorting algorithms are useful to illustrate the basic notions of computer science.
A variety of sorting algorithms are studied in this book: binary, insertion, bubble, selection, merge, quick, and tournament. In Section 3.3, we'll study the amount of time required to sort a list using the sorting algorithms covered in this section.

Bubble Sort
Bubble sort makes multiple passes through a list. Every pair of adjacent elements found to be out of order is interchanged.

procedure bubblesort(a1, ..., an: real numbers with n ≥ 2)
  for i := 1 to n − 1
    for j := 1 to n − i
      if aj > aj+1 then interchange aj and aj+1
{a1, ..., an is now in increasing order}

Example: Show the steps of bubble sort with 3 2 4 1 5.
At the end of the first pass, the largest element has been put into the correct position. At the end of the second pass, the 2nd largest element has been put into the correct position. In each subsequent pass, an additional element is put into the correct position.

Insertion Sort
Insertion sort begins with the 2nd element. It compares the 2nd element with the 1st and puts it before the first if it is not larger. Next, the 3rd element is put into the correct position among the first 3 elements. In each subsequent pass, the (n + 1)st element is put into its correct position among the first n + 1 elements. Linear search is used to find the correct position.

procedure insertion sort(a1, ..., an: real numbers with n ≥ 2)
  for j := 2 to n
    i := 1
    while aj > ai
      i := i + 1
    m := aj
    for k := 0 to j − i − 1
      aj−k := aj−k−1
    ai := m
{Now a1, ..., an is in increasing order}
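As a rough Python rendering of the two sorting procedures (a sketch only; 0-based indexing and in-place modification are our own choices):

```python
def bubble_sort(a):
    """Sort the list a in place by swapping adjacent out-of-order pairs."""
    n = len(a)
    for i in range(n - 1):            # passes 1 .. n-1
        for j in range(n - 1 - i):    # the last i elements are already in place
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]


def insertion_sort(a):
    """Sort the list a in place, inserting each element into the sorted prefix."""
    for j in range(1, len(a)):
        key = a[j]
        i = 0
        while key > a[i]:             # linear search for the insertion point
            i += 1
        a[i + 1:j + 1] = a[i:j]       # shift the block one place to the right
        a[i] = key


nums = [3, 2, 4, 1, 5]
insertion_sort(nums)
print(nums)  # [1, 2, 3, 4, 5]
```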
Example: Show all the steps of insertion sort with the input 3 2 4 1 5.
  2 3 4 1 5 (first two positions are interchanged)
  2 3 4 1 5 (third element remains in its position)
  1 2 3 4 5 (fourth element is placed at the beginning)
  1 2 3 4 5 (fifth element remains in its position)

Greedy Algorithms
Optimization problems minimize or maximize some parameter over all possible inputs. Among the many optimization problems we will study are:
- Finding a route between two cities with the smallest total mileage.
- Determining how to encode messages using the fewest possible bits.
- Finding the fiber links between network nodes using the least amount of fiber.
Optimization problems can often be solved using a greedy algorithm, which makes the "best" choice at each step. Making the "best choice" at each step does not necessarily produce an optimal solution to the overall problem, but in many instances it does. After specifying what the "best choice" at each step is, we try to prove that this approach always produces an optimal solution, or find a counterexample to show that it does not. The greedy approach is an example of an algorithmic paradigm, a general approach for designing an algorithm. We return to algorithmic paradigms in Section 3.3.

Greedy Algorithms: Making Change
Example: Design a greedy algorithm for making change (in U.S. money) of n cents with the following coins: quarters (25 cents), dimes (10 cents), nickels (5 cents), and pennies (1 cent), using the least total number of coins.
Idea: At each step choose the coin with the largest possible value that does not exceed the amount of change left. If n = 67 cents:
- First choose a quarter, leaving 67 − 25 = 42 cents. Then choose another quarter, leaving 42 − 25 = 17 cents.
- Then choose 1 dime, leaving 17 − 10 = 7 cents.
- Choose 1 nickel, leaving 7 − 5 = 2 cents.
- Choose a penny, leaving one cent. Choose another penny, leaving 0 cents.

Greedy Change-Making Algorithm
Solution: Greedy change-making algorithm for n cents. The algorithm works with any coin denominations c1, c2, ..., cr. For the example of U.S. currency, we may have quarters, dimes, nickels, and pennies, with c1 = 25, c2 = 10, c3 = 5, and c4 = 1.

procedure change(c1, c2, ..., cr: values of coins, where c1 > c2 > ... > cr; n: a positive integer)
  for i := 1 to r
    di := 0 {di counts the coins of denomination ci}
    while n ≥ ci
      di := di + 1 {add a coin of denomination ci}
      n := n − ci
{di is the number of coins of denomination ci in the change, for i = 1, 2, ..., r}
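A minimal Python sketch of the greedy change-making procedure (the function name and returning a list of counts are our own conventions):

```python
def greedy_change(n, denominations=(25, 10, 5, 1)):
    """Make change for n cents, always taking the largest usable coin.

    denominations must be strictly decreasing (c1 > c2 > ... > cr);
    returns the number of coins used of each denomination.
    """
    counts = []
    for c in denominations:
        d = 0
        while n >= c:     # keep adding coins of denomination c
            d += 1
            n -= c
        counts.append(d)
    return counts


print(greedy_change(67))               # [2, 1, 1, 2]: 2 quarters, 1 dime, 1 nickel, 2 pennies
print(greedy_change(31, (25, 10, 1)))  # [1, 0, 6]: 7 coins, though 3 dimes + 1 penny needs only 4
```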
Proving Optimality for U.S. Coins
We now show that the change-making algorithm for U.S. coins is optimal.
Lemma 1: If n is a positive integer, then n cents in change using quarters, dimes, nickels, and pennies, using the fewest coins possible, has at most 2 dimes, at most 1 nickel, at most 4 pennies, and cannot have 2 dimes and a nickel. The total amount of change in dimes, nickels, and pennies must not exceed 24 cents.
Proof (by contradiction):
- If we had 3 dimes, we could replace them with a quarter and a nickel.
- If we had 2 nickels, we could replace them with 1 dime.
- If we had 5 pennies, we could replace them with a nickel.
- If we had 2 dimes and 1 nickel, we could replace them with a quarter.
The allowable combinations have a maximum value of 24 cents: 2 dimes, no nickel, and 4 pennies.

Theorem: The greedy change-making algorithm for U.S. coins produces change using the fewest coins possible.
Proof (by contradiction): Assume there is a positive integer n such that change can be made for n cents using quarters, dimes, nickels, and pennies with a fewer total number of coins than given by the algorithm. Let q′ be the number of quarters used in this optimal way and q the number of quarters in the greedy algorithm's solution. If q′ < q, the optimal solution would have to produce at least 25 cents using dimes, nickels, and pennies, which is not possible by Lemma 1, since the value of the coins other than quarters cannot be greater than 24 cents. So both solutions use the same number of quarters and, similarly by Lemma 1, the same number of dimes, nickels, and pennies.

Greedy Change-Making Algorithm
Optimality depends on the denominations available. For U.S. coins, optimality still holds if we add half dollars (50 cents) and dollar coins (100 cents). But if we allow only quarters (25 cents), dimes (10 cents), and pennies (1 cent), the algorithm no longer produces the minimum number of coins. Consider the example of 31 cents: the optimal number of coins is 4, i.e., 3 dimes and 1 penny, but the algorithm outputs 7 coins, i.e., 1 quarter and 6 pennies.

Greedy Scheduling
Example: We have a group of proposed talks with start and end times. Construct a greedy algorithm to schedule as many as possible in a lecture hall, under the following assumptions:
- When a talk starts, it continues until the end.
- No two talks can occur at the same time.
- A talk can begin at the same time that another ends.
- Once we have selected some of the talks, we cannot add a talk that is incompatible with those already selected because it overlaps at least one of the previously selected talks.
How should we make the "best choice" at each step of the algorithm? That is, which talk do we pick:
- the talk that starts earliest among those compatible with already chosen talks?
- the talk that is shortest among those compatible?
- the talk that ends earliest among those compatible with already chosen talks?
Picking the shortest talk doesn't work. Consider this counterexample:
  Talk 1: starts 8:00 AM, ends 9:45 AM
  Talk 2: starts 9:00 AM, ends 10:00 AM
  Talk 3: starts 9:45 AM, ends 11:00 AM
The shortest talk, Talk 2, overlaps both of the others, so choosing it schedules only one talk, whereas Talks 1 and 3 are compatible with each other. Picking the talk that ends soonest does work.

Greedy Scheduling Algorithm
Solution: At each step, choose the talk with the earliest ending time among the talks compatible with those selected. (This will be proven correct by induction in Chapter 5.)

procedure schedule(s1 ≤ s2 ≤ ... ≤ sn: start times, e1 ≤ e2 ≤ ... ≤ en: end times)
  sort talks by finish time and reorder so that e1 ≤ e2 ≤ ... ≤ en
  S := ∅
  for j := 1 to n
    if talk j is compatible with S then S := S ∪ {talk j}
return S {S is the set of talks scheduled}
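A compact Python sketch of the earliest-finish-time strategy (representing each talk as a (start, end) pair is our own convention):

```python
def schedule(talks):
    """Greedy interval scheduling for a list of (start, end) talks.

    Repeatedly selects the compatible talk with the earliest finish time.
    """
    selected = []
    last_end = None
    for start, end in sorted(talks, key=lambda talk: talk[1]):  # by finish time
        if last_end is None or start >= last_end:  # compatible with selection
            selected.append((start, end))
            last_end = end
    return selected


# The counterexample above, with times as minutes after midnight.
talks = [(480, 585), (540, 600), (585, 660)]  # 8:00-9:45, 9:00-10:00, 9:45-11:00
print(schedule(talks))  # [(480, 585), (585, 660)] -- Talks 1 and 3
```

Sorting by finish time dominates the running time, so the sketch runs in O(n log n).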
Halting Problem
Example: Can we develop a procedure that takes as input a computer program along with its input and determines whether the program will eventually halt with that input?
Solution: Proof by contradiction. Assume that there is such a procedure and call it H(P, I). The procedure H(P, I) takes as input a program P and the input I to P. H outputs "halt" if it is the case that P will stop when run with input I; otherwise, H outputs "loops forever."
Since a program is a string of characters, we can call H(P, P). Construct a procedure K(P) that works as follows:
- If H(P, P) outputs "loops forever", then K(P) halts.
- If H(P, P) outputs "halt", then K(P) goes into an infinite loop, printing "ha" on each iteration.
Now we call K with K as input, i.e. K(K):
- If the output of H(K, K) is "loops forever", then K(K) halts. A contradiction.
- If the output of H(K, K) is "halt", then K(K) loops forever. A contradiction.
Therefore, there cannot be a procedure that decides whether or not an arbitrary program halts. The halting problem is unsolvable.

Section 3.2: The Growth of Functions

Section Summary
Big-O Notation
Big-O Estimates for Important Functions
Big-Omega and Big-Theta Notation
(Portraits: Paul Gustav Heinrich Bachmann, 1837-1920; Edmund Landau, 1877-1938; Donald E. Knuth, born 1938)

The Growth of Functions
In both computer science and mathematics, there are many times when we care about how fast a function grows. In computer science, we want to understand how quickly an algorithm can solve a problem as the size of the input grows: we can compare the efficiency of two different algorithms for solving the same problem, and we can determine whether it is practical to use a particular algorithm as the input grows. We'll study these questions in Section 3.3. Two areas of mathematics where questions about the growth of functions are studied are number theory (covered in Chapter 4) and combinatorics (covered in Chapters 6 and 8).

Big-O Notation
Definition: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k. This is read as "f(x) is big-O of g(x)" or "g asymptotically dominates f." The constants C and k are called witnesses to the relationship f(x) is O(g(x)). Only one pair of witnesses is needed.
(Figure: illustration of big-O notation; for x > k, the graph of Cg(x) lies above the graph of f(x).)

Some Important Points about Big-O Notation
- If one pair of witnesses is found, then there are infinitely many pairs: we can always make k or C larger and still maintain the inequality. Any pair C′ and k′ with C′ > C and k′ > k is also a pair of witnesses, since |f(x)| ≤ C|g(x)| ≤ C′|g(x)| whenever x > k′ > k.
- You may see "f(x) = O(g(x))" instead of "f(x) is O(g(x))." But this is an abuse of the equals sign, since the meaning is that there is an inequality relating the values of f and g for sufficiently large values of x. It is acceptable to write f(x) ∊ O(g(x)), because O(g(x)) represents the set of functions that are O(g(x)).
- Usually, we will drop the absolute value signs, since we will always deal with functions that take on positive values.

Using the Definition of Big-O Notation
Example: Show that f(x) = x² + 2x + 1 is O(x²).
Solution: Since x < x² and 1 < x² when x > 1, we have 0 ≤ x² + 2x + 1 ≤ x² + 2x² + x² = 4x² whenever x > 1. We can take C = 4 and k = 1 as witnesses. Alternatively, when x > 2, we have 2x ≤ x² and 1 < x², so 0 ≤ x² + 2x + 1 ≤ 3x² when x > 2. Can take C = 3 and k = 2 as witnesses instead.
(Figure: illustration of the witnesses showing that x² + 2x + 1 is O(x²).)

Big-O Notation
Both x² + 2x + 1 and x² are such that each is big-O of the other; we say that the two functions are of the same order. (More on this later.)
If f(x) is O(g(x)) and h(x) is larger than g(x) for all positive real numbers x, then f(x) is O(h(x)): if |f(x)| ≤ C|g(x)| for x > k and |h(x)| > |g(x)| for all x, then |f(x)| ≤ C|h(x)| whenever x > k. Hence, f(x) is O(h(x)).
For many applications, the goal is to select the function g(x) in O(g(x)) as small as possible (up to multiplication by a constant, of course).

Using the Definition of Big-O Notation
Example: Show that 7x² is O(x³).
Solution: When x > 7, 7x² < x³, so we can take C = 1 and k = 7 as witnesses.
Example: Show that n² is not O(n).
Solution: Suppose there were constants C and k such that n² ≤ Cn whenever n > k. Then (by dividing both sides of n² ≤ Cn by n) n ≤ C must hold for all n > k. A contradiction!

Big-O Estimates for Polynomials
Example: Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where a₀, a₁, ..., aₙ are real numbers with aₙ ≠ 0. Then f(x) is O(xⁿ): the leading term aₙxⁿ of a polynomial dominates its growth.
Proof: Assuming x > 1,
  |f(x)| = |aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀|
         ≤ |aₙ|xⁿ + |aₙ₋₁|xⁿ⁻¹ + ⋯ + |a₁|x + |a₀|   (by the triangle inequality, an exercise in Section 1.8)
         = xⁿ(|aₙ| + |aₙ₋₁|/x + ⋯ + |a₁|/xⁿ⁻¹ + |a₀|/xⁿ)
         ≤ xⁿ(|aₙ| + |aₙ₋₁| + ⋯ + |a₁| + |a₀|).
Take C = |aₙ| + |aₙ₋₁| + ⋯ + |a₀| and k = 1. Then f(x) is O(xⁿ).
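The witness definition lends itself to a quick numerical sanity check. This Python snippet (our own illustration; it provides evidence, not a proof) tests candidate witnesses for the example above:

```python
def is_witnessed(f, g, C, k, xs):
    """Check |f(x)| <= C*|g(x)| for sampled x > k (evidence, not a proof)."""
    return all(abs(f(x)) <= C * abs(g(x)) for x in xs if x > k)


f = lambda x: x**2 + 2*x + 1
g = lambda x: x**2
print(is_witnessed(f, g, C=4, k=1, xs=range(2, 10_000)))  # True
print(is_witnessed(f, g, C=1, k=1, xs=range(2, 10_000)))  # False: C is too small
```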
Big-O Estimates for Some Important Functions
Example: Use big-O notation to estimate the sum of the first n positive integers.
Solution: 1 + 2 + ⋯ + n ≤ n + n + ⋯ + n = n², so 1 + 2 + ⋯ + n is O(n²), taking C = 1 and k = 1.
Example: Use big-O notation to estimate the factorial function n! = 1 · 2 · ⋯ · n.
Solution: n! = 1 · 2 · ⋯ · n ≤ n · n · ⋯ · n = nⁿ, so n! is O(nⁿ), taking C = 1 and k = 1.
Example: Use big-O notation to estimate log n!.
Solution: Given that n! ≤ nⁿ (previous example), we have log n! ≤ log nⁿ = n log n. Hence, log n! is O(n log n), taking C = 1 and k = 1.

Display of Growth of Functions
(Figure: plot of common reference functions; note the difference in the behavior of the functions as n gets larger.)

Useful Big-O Estimates Involving Logarithms, Powers, and Exponents
- If d > c > 1, then nᶜ is O(nᵈ), but nᵈ is not O(nᶜ).
- If b > 1 and c and d are positive, then (log_b n)ᶜ is O(nᵈ), but nᵈ is not O((log_b n)ᶜ).
- If b > 1 and d is positive, then nᵈ is O(bⁿ), but bⁿ is not O(nᵈ).
- If c > b > 1, then bⁿ is O(cⁿ), but cⁿ is not O(bⁿ).

Combinations of Functions
- If f₁(x) is O(g₁(x)) and f₂(x) is O(g₂(x)), then (f₁ + f₂)(x) is O(max(|g₁(x)|, |g₂(x)|)). (Proof below.)
- If f₁(x) and f₂(x) are both O(g(x)), then (f₁ + f₂)(x) is O(g(x)). (See text for argument.)
- If f₁(x) is O(g₁(x)) and f₂(x) is O(g₂(x)), then (f₁f₂)(x) is O(g₁(x)g₂(x)). (See text for argument.)
Proof of the first fact: By the definition of big-O notation, there are constants C₁, C₂, k₁, k₂ such that |f₁(x)| ≤ C₁|g₁(x)| when x > k₁ and |f₂(x)| ≤ C₂|g₂(x)| when x > k₂. Let g(x) = max(|g₁(x)|, |g₂(x)|). Then
  |(f₁ + f₂)(x)| = |f₁(x) + f₂(x)| ≤ |f₁(x)| + |f₂(x)| (by the triangle inequality |a + b| ≤ |a| + |b|), and
  |f₁(x)| + |f₂(x)| ≤ C₁|g₁(x)| + C₂|g₂(x)| ≤ C₁|g(x)| + C₂|g(x)| = (C₁ + C₂)|g(x)| = C|g(x)|, where C = C₁ + C₂.
Therefore |(f₁ + f₂)(x)| ≤ C|g(x)| whenever x > k, where k = max(k₁, k₂).

Ordering Functions by Order of Growth
Put the functions below in order so that each function is big-O of the next function on the list:
  f₁(n) = (1.5)ⁿ, f₂(n) = 8n³ + 17n² + 111, f₃(n) = (log n)², f₄(n) = 2ⁿ, f₅(n) = log(log n), f₆(n) = n²(log n)³, f₇(n) = 2ⁿ(n² + 1), f₈(n) = n³ + n(log n)², f₉(n) = 10000, f₁₀(n) = n!
We solve this exercise by successively finding the function that grows slowest among all those left on the list:
  f₉(n) = 10000 (constant, does not increase with n)
  f₅(n) = log(log n) (grows slowest of all the others)
  f₃(n) = (log n)² (grows next slowest)
  f₆(n) = n²(log n)³ (next largest; the (log n)³ factor is smaller than any power of n)
  f₂(n) = 8n³ + 17n² + 111 (tied with the one below)
  f₈(n) = n³ + n(log n)² (tied with the one above)
  f₁(n) = (1.5)ⁿ (next largest, an exponential function)
  f₄(n) = 2ⁿ (grows faster than the one above, since 2 > 1.5)
  f₇(n) = 2ⁿ(n² + 1) (grows faster than the one above because of the n² + 1 factor)
  f₁₀(n) = n! (n! grows faster than cⁿ for every constant c)

Big-Omega Notation
Definition: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Ω(g(x)) if there are constants C and k such that |f(x)| ≥ C|g(x)| when x > k. We say that "f(x) is big-Omega of g(x)." (Ω is the upper-case version of the lower-case Greek letter ω.)
Big-O gives an upper bound on the growth of a function, while big-Omega gives a lower bound: big-Omega tells us that a function grows at least as fast as another. f(x) is Ω(g(x)) if and only if g(x) is O(f(x)); this follows from the definitions (see the text for details).
Example: Show that f(x) = 8x³ + 5x² + 7 is Ω(g(x)), where g(x) = x³.
Solution: 8x³ + 5x² + 7 ≥ 8x³ ≥ x³ for all positive real numbers x. Is it also the case that g(x) = x³ is Ω(8x³ + 5x² + 7)?
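As an informal check of the ordering exercise above, the following Python snippet (our own illustration) tabulates each function at a few values of n in the derived slow-to-fast order:

```python
import math

# The ten functions from the ordering exercise, in the derived slow-to-fast order.
funcs = [
    ("f9  = 10000",           lambda n: 10000.0),
    ("f5  = log(log n)",      lambda n: math.log(math.log(n))),
    ("f3  = (log n)^2",       lambda n: math.log(n) ** 2),
    ("f6  = n^2 (log n)^3",   lambda n: n**2 * math.log(n) ** 3),
    ("f2  = 8n^3+17n^2+111",  lambda n: 8.0*n**3 + 17*n**2 + 111),
    ("f8  = n^3+n(log n)^2",  lambda n: n**3 + n * math.log(n) ** 2),
    ("f1  = 1.5^n",           lambda n: 1.5 ** n),
    ("f4  = 2^n",             lambda n: 2.0 ** n),
    ("f7  = 2^n (n^2+1)",     lambda n: 2.0 ** n * (n**2 + 1)),
    ("f10 = n!",              lambda n: float(math.factorial(n))),
]
for name, f in funcs:
    print(f"{name:<22}" + "".join(f"{f(n):>12.3g}" for n in (10, 50, 100)))
# Small n can be misleading (f2(10) < f9(10)); the ordering is asymptotic.
```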
Big-Theta Notation
Definition: Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. The function f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x)). We say that "f(x) is big-Theta of g(x)," that "f(x) is of order g(x)," and that "f(x) and g(x) are of the same order." (Θ is the upper-case version of the lower-case Greek letter θ.)
f(x) is Θ(g(x)) if and only if there exist constants C₁, C₂, and k such that C₁|g(x)| ≤ |f(x)| ≤ C₂|g(x)| when x > k. This follows from the definitions of big-O and big-Omega.

Big-Theta Notation
Example: Show that the sum of the first n positive integers is Θ(n²).
Solution: Let f(n) = 1 + 2 + ⋯ + n. We have already shown that f(n) is O(n²). To show that f(n) is Ω(n²), we need a positive constant C such that f(n) ≥ Cn² for sufficiently large n. Summing only the terms greater than n/2, we obtain the inequality
  1 + 2 + ⋯ + n ≥ ⌈n/2⌉ + (⌈n/2⌉ + 1) + ⋯ + n
              ≥ ⌈n/2⌉ + ⌈n/2⌉ + ⋯ + ⌈n/2⌉
              = (n − ⌈n/2⌉ + 1)⌈n/2⌉
              ≥ (n/2)(n/2) = n²/4.
Taking C = 1/4, f(n) ≥ Cn² for all positive integers n. Hence, f(n) is Ω(n²), and we can conclude that f(n) is Θ(n²).

Big-Theta Notation
Example: Show that f(x) = 3x² + 8x log x is Θ(x²).
Solution: 3x² + 8x log x ≤ 11x² for x > 1, since 0 ≤ 8x log x ≤ 8x². Hence, 3x² + 8x log x is O(x²). Also, x² is clearly O(3x² + 8x log x). Hence, 3x² + 8x log x is Θ(x²).

Big-Theta Notation
When f(x) is Θ(g(x)), it must also be the case that g(x) is Θ(f(x)). Note that f(x) is Θ(g(x)) if and only if f(x) is O(g(x)) and g(x) is O(f(x)). Sometimes writers are careless and write as if big-O notation had the same meaning as big-Theta.

Big-Theta Estimates for Polynomials
Theorem: Let f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ⋯ + a₁x + a₀, where a₀, a₁, ..., aₙ are real numbers with aₙ ≠ 0. Then f(x) is of order xⁿ (that is, Θ(xⁿ)). (The proof is an exercise.)
Example: Any polynomial of degree 5 with nonzero leading coefficient is of order x⁵ (Θ(x⁵)), and any polynomial of degree 199 with nonzero leading coefficient is of order x¹⁹⁹ (Θ(x¹⁹⁹)).

Section 3.3: Complexity of Algorithms

Section Summary
Time Complexity
Worst-Case Complexity
Algorithmic Paradigms
Understanding the Complexity of Algorithms

The Complexity of Algorithms
Given an algorithm, how efficient is it for solving a problem given input of a particular size? To answer this question, we ask: How much time does this algorithm use to solve the problem? How much computer memory does it use? When we analyze the time the algorithm uses to solve the problem given input of a particular size, we are studying the time complexity of the algorithm. When we analyze the computer memory the algorithm uses, we are studying the space complexity.
In this course, we focus on time complexity; the space complexity of algorithms is studied in later courses. We will measure time complexity in terms of the number of operations an algorithm uses, and we will use big-O and big-Theta notation to estimate it. We can use this analysis to see whether it is practical to use an algorithm to solve problems with input of a particular size, and to compare the efficiency of different algorithms for solving the same problem. We ignore implementation details (including the data structures used and both the hardware and software platforms) because it is extremely complicated to consider them.

Time Complexity
To analyze the time complexity of algorithms, we determine the number of operations, such as comparisons and arithmetic operations (addition, multiplication, etc.). We can estimate the time a computer may actually use to solve a problem using the amount of time required to do basic operations.
We ignore minor details, such as the "housekeeping" aspects of the algorithm. We will focus on the worst-case time complexity of an algorithm: this provides an upper bound on the number of operations an algorithm uses to solve a problem with input of a particular size. It is usually much more difficult to determine the average-case time complexity of an algorithm, i.e., the average number of operations an algorithm uses to solve a problem over all inputs of a particular size.

Complexity Analysis of Algorithms
Example: Describe the time complexity of the algorithm for finding the maximum element in a finite sequence.

procedure max(a1, a2, ..., an: integers)
  max := a1
  for i := 2 to n
    if max < ai then max := ai
return max {max is the largest element}

Solution: Count the number of comparisons. The comparison max < ai is made n − 1 times. Each time i is incremented, a comparison is made to see if i ≤ n, and one last comparison determines that i > n. Exactly 2(n − 1) + 1 = 2n − 1 comparisons are made. Hence, the time complexity of the algorithm is Θ(n).

Worst-Case Complexity of Linear Search
Example: Determine the time complexity of the linear search algorithm.

procedure linear search(x: integer, a1, a2, ..., an: distinct integers)
  i := 1
  while (i ≤ n and x ≠ ai)
    i := i + 1
  if i ≤ n then location := i
  else location := 0
return location {location is the subscript of the term that equals x, or is 0 if x is not found}

Solution: Count the number of comparisons. At each step of the loop, two comparisons are made: i ≤ n and x ≠ ai. After the loop, one more i ≤ n comparison is made. If x = ai, 2i + 1 comparisons are used. If x is not on the list, 2n + 1 comparisons are made and then an additional comparison is used to exit the loop. So, in the worst case, 2n + 2 comparisons are made. Hence, the complexity is Θ(n).

Average-Case Complexity of Linear Search
Example: Describe the average-case performance of the linear search algorithm. (Although it is usually very difficult to determine average-case complexity, it is easy for linear search.)
Solution: Assume the element is in the list and that the possible positions are equally likely. By the argument above, if x = ai, the number of comparisons is 2i + 1. Averaging over the n positions gives (3 + 5 + ⋯ + (2n + 1))/n = (2(1 + 2 + ⋯ + n) + n)/n = n + 2. Hence, the average-case complexity of linear search is Θ(n).

Worst-Case Complexity of Binary Search
Example: Describe the time complexity of binary search in terms of the number of comparisons used.

procedure binary search(x: integer, a1, a2, ..., an: increasing integers)
  i := 1 {i is the left endpoint of the interval}
  j := n {j is the right endpoint of the interval}
  while i < j
    m := ⌊(i + j)/2⌋
    if x > am then i := m + 1
    else j := m
  if x = ai then location := i
  else location := 0
return location {location is the subscript i of the term ai equal to x, or 0 if x is not found}

Solution: Assume (for simplicity) that the list has n = 2ᵏ elements; note that k = log n. Two comparisons are made at each stage: i < j and x > am. At the first iteration the size of the list is 2ᵏ, after the first iteration it is 2ᵏ⁻¹, then 2ᵏ⁻², and so on until the size of the list is 2¹ = 2. At the last step, a comparison tells us that the size of the list is 2⁰ = 1, and the element is compared with the single remaining element. Hence, at most 2k + 2 = 2 log n + 2 comparisons are made. Therefore, the time complexity is Θ(log n), better than linear search.
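These comparison counts are easy to observe empirically. Here is a hedged Python sketch (our own instrumentation, mirroring the counting convention in the linear search analysis above):

```python
def linear_search_count(x, a):
    """Linear search that also counts comparisons, mirroring the analysis above."""
    comparisons = 0
    i = 1
    while True:
        comparisons += 1              # the i <= n test
        if i > len(a):
            break
        comparisons += 1              # the x != a_i test
        if x == a[i - 1]:
            break
        i += 1
    comparisons += 1                  # the final i <= n test after the loop
    location = i if i <= len(a) else 0
    return location, comparisons


a = list(range(1, 17))                # n = 16
print(linear_search_count(5, a))      # (5, 11)  = 2i + 1 with i = 5
print(linear_search_count(99, a))     # (0, 34)  = 2n + 2, the worst case
```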
Worst-Case Complexity of Bubble Sort
Example: What is the worst-case complexity of bubble sort in terms of the number of comparisons made?

procedure bubblesort(a1, ..., an: real numbers with n ≥ 2)
  for i := 1 to n − 1
    for j := 1 to n − i
      if aj > aj+1 then interchange aj and aj+1
{a1, ..., an is now in increasing order}

Solution: A sequence of n − 1 passes is made through the list, and on the ith pass n − i comparisons are made. The worst-case complexity of bubble sort is Θ(n²), since
  (n − 1) + (n − 2) + ⋯ + 2 + 1 = n(n − 1)/2.

Worst-Case Complexity of Insertion Sort
Example: What is the worst-case complexity of insertion sort in terms of the number of comparisons made?

procedure insertion sort(a1, ..., an: real numbers with n ≥ 2)
  for j := 2 to n
    i := 1
    while aj > ai
      i := i + 1
    m := aj
    for k := 0 to j − i − 1
      aj−k := aj−k−1
    ai := m

Solution: In the worst case, inserting the jth element takes j comparisons, so the total number of comparisons is
  2 + 3 + ⋯ + n = n(n + 1)/2 − 1.
Therefore the complexity is Θ(n²).

Matrix Multiplication Algorithm
The definition of matrix multiplication can be expressed as an algorithm: C = AB, where C is an m × n matrix that is the product of the m × k matrix A and the k × n matrix B. This algorithm carries out matrix multiplication based on its definition.

procedure matrix multiplication(A, B: matrices)
  for i := 1 to m
    for j := 1 to n
      cij := 0
      for q := 1 to k
        cij := cij + aiq·bqj
return C {C = [cij] is the product of A and B}

Complexity of Matrix Multiplication
Example: How many additions of integers and multiplications of integers are used by the matrix multiplication algorithm to multiply two n × n matrices?
Solution: There are n² entries in the product. Finding each entry requires n multiplications and n − 1 additions. Hence, n³ multiplications and n²(n − 1) additions are used, and the complexity of matrix multiplication is O(n³).

Boolean Product Algorithm
The definition of the Boolean product of zero-one matrices can also be converted to an algorithm.

procedure Boolean product(A, B: zero-one matrices)
  for i := 1 to m
    for j := 1 to n
      cij := 0
      for q := 1 to k
        cij := cij ∨ (aiq ∧ bqj)
return C {C = [cij] is the Boolean product of A and B}

Complexity of Boolean Product Algorithm
Example: How many bit operations are used to find A ⊙ B, where A and B are n × n zero-one matrices?
Solution: There are n² entries in A ⊙ B. A total of n ORs and n ANDs are used to find each entry, so each entry takes 2n bit operations. A total of 2n³ operations are used; therefore the complexity is O(n³).

Matrix-Chain Multiplication
How should the matrix chain A₁A₂⋯Aₙ be computed using the fewest multiplications of integers, where A₁, A₂, ..., Aₙ are m₁ × m₂, m₂ × m₃, ..., mₙ × mₙ₊₁ integer matrices? Matrix multiplication is associative (exercise in Section 2.6).
Example: In which order should the integer matrices A₁A₂A₃, where A₁ is 30 × 20, A₂ is 20 × 40, and A₃ is 40 × 10, be multiplied to use the least number of multiplications?
Solution: There are two possible ways to compute A₁A₂A₃:
- A₁(A₂A₃): A₂A₃ takes 20 · 40 · 10 = 8000 multiplications. Then multiplying A₁ by the 20 × 10 matrix A₂A₃ takes 30 · 20 · 10 = 6000 multiplications. So the total number is 8000 + 6000 = 14,000.
- (A₁A₂)A₃: A₁A₂ takes 30 · 20 · 40 = 24,000 multiplications. Then multiplying the 30 × 40 matrix A₁A₂ by A₃ takes 30 · 40 · 10 = 12,000 multiplications. So the total number is 24,000 + 12,000 = 36,000.
So the first method is best. An efficient algorithm for finding the best order for matrix-chain multiplication can be based on the algorithmic paradigm known as dynamic programming (see Exercise 57 in Section 8.1).
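A small Python helper (our own, using the p · q · r cost formula above) confirms the multiplication counts for the two parenthesizations:

```python
def mult_cost(p, q, r):
    """Integer multiplications to multiply a p x q matrix by a q x r matrix."""
    return p * q * r


# Dimensions from the example: A1 is 30 x 20, A2 is 20 x 40, A3 is 40 x 10.
d = [30, 20, 40, 10]                  # A_i has dimensions d[i-1] x d[i]
cost_right = mult_cost(d[1], d[2], d[3]) + mult_cost(d[0], d[1], d[3])  # A1(A2A3)
cost_left = mult_cost(d[0], d[1], d[2]) + mult_cost(d[0], d[2], d[3])   # (A1A2)A3
print(cost_right, cost_left)          # 14000 36000
```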
Algorithmic Paradigms
An algorithmic paradigm is a general approach, based on a particular concept, for constructing algorithms to solve a variety of problems. Greedy algorithms were introduced in Section 3.1, and we discuss brute-force algorithms in this section. We will see divide-and-conquer algorithms (Chapter 8), dynamic programming (Chapter 8), backtracking (Chapter 11), and probabilistic algorithms (Chapter 7). There are many other paradigms that you may see in later courses.

Brute-Force Algorithms
A brute-force algorithm solves a problem in the most straightforward manner, without taking advantage of any ideas that can make the algorithm more efficient. Brute-force algorithms we have previously seen are sequential search, bubble sort, and insertion sort.

Computing the Closest Pair of Points by Brute Force
Example: Construct a brute-force algorithm for finding the closest pair of points in a set of n points in the plane, and provide a worst-case estimate of the number of arithmetic operations.
Solution: Recall that the distance between (xi, yi) and (xj, yj) is √((xj − xi)² + (yj − yi)²). A brute-force algorithm simply computes the distance between all pairs of points and picks the pair with the smallest distance. Note that there is no need to compute the square root, since the square of the distance between two points is smallest when the distance is smallest.

Algorithm for finding the closest pair in a set of n points:

procedure closest pair((x1, y1), (x2, y2), ..., (xn, yn): xi, yi real numbers)
  min := ∞
  for i := 2 to n
    for j := 1 to i − 1
      if (xj − xi)² + (yj − yi)² < min then
        min := (xj − xi)² + (yj − yi)²
        closest pair := ((xi, yi), (xj, yj))
return closest pair

The algorithm loops through n(n − 1)/2 pairs of points, computes the value (xj − xi)² + (yj − yi)² for each, and compares it with the minimum. So, the algorithm uses Θ(n²) arithmetic and comparison operations. We will develop an algorithm with O(n log n) worst-case complexity in Section 8.3.
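A direct Python transcription of the brute-force closest-pair procedure (comparing squared distances, as noted above; the function name is our own):

```python
def closest_pair(points):
    """Brute force: return the pair of points at the smallest distance.

    Compares squared distances, so no square roots are needed.
    """
    best, min_sq = None, float("inf")
    for i in range(1, len(points)):   # i := 2 to n, in 0-based form
        xi, yi = points[i]
        for j in range(i):            # j := 1 to i - 1
            xj, yj = points[j]
            d_sq = (xj - xi) ** 2 + (yj - yi) ** 2
            if d_sq < min_sq:
                min_sq, best = d_sq, (points[j], points[i])
    return best


print(closest_pair([(0, 0), (5, 4), (1, 1), (9, 9)]))  # ((0, 0), (1, 1))
```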
Understanding the Complexity of Algorithms
(Table: commonly used terminology for the complexity of algorithms, and computing times for problems of various input sizes; times of more than 10¹⁰⁰ years are indicated with an *.)

Complexity of Problems
Tractable problem: There exists a polynomial-time algorithm to solve the problem. Such problems are said to belong to the class P.
Intractable problem: There does not exist a polynomial-time algorithm to solve the problem.
Unsolvable problem: No algorithm exists to solve the problem, e.g., the halting problem.
Class NP: A solution can be checked in polynomial time, but no polynomial-time algorithm has been found for finding a solution to problems in this class.
NP-complete class: If a polynomial-time algorithm is found for any one member of the class, it can be used to solve all the problems in the class.

P Versus NP Problem
The P versus NP problem asks whether P = NP: are there problems whose solutions can be checked in polynomial time but that cannot be solved in polynomial time? Note that the fact that no one has found a polynomial-time algorithm for a problem is different from showing that the problem cannot be solved by a polynomial-time algorithm. If a polynomial-time algorithm for any of the problems in the NP-complete class were found, then that algorithm could be used to obtain a polynomial-time algorithm for every problem in the NP-complete class. Satisfiability (in Section 1.3) is an NP-complete problem. It is generally believed that P ≠ NP, since no one has been able to find a polynomial-time algorithm for any of the problems in the NP-complete class. The problem of P versus NP remains one of the most famous unsolved problems in mathematics (including theoretical computer science); the Clay Mathematics Institute has offered a prize of $1,000,000 for a solution.
(Portrait: Stephen Cook, born 1939)