09:22:56 >>Professor: Good morning everyone. We will get started in just a minute. Okay, let's get started. Good morning everyone. Last time we finished up by talking about recurrences. We saw that in the case of a recursive algorithm, like the divide and conquer convex hull algorithm we talked about before, we ended up getting equations like this when we looked at the run time of that algorithm. We said the run time of the algorithm on input N could be expressed in terms of the run time on smaller 09:23:39 size inputs. It operates by dividing the set into two halves and running the algorithm recursively on each. The total run time is the time taken to process the left half of the points, T of N over two, plus the right half of the points, which is T of N over two, so we had two times T of N over two, plus the nonrecursive work. There's the time needed to split the input into two halves, and to take the results from the recursive calls and combine them into an answer for the original N points. That's 09:24:31 on the order of N amount of work. This we call a recurrence relation. The question is, if we have a definition of the function T like this, together with some base case saying that for N being three or less, T of N takes constant time, how can we solve this to find the asymptotic growth rate of this function T? We want to know: will it be polynomial time or exponential time? If it's polynomial, is it linear, quadratic? The first method we talked about for solving this recurrence 09:25:11 is a recursion tree. Here at the root node we have the original problem. We are running the algorithm on input N. It makes two recursive calls on inputs of size N over two. Each of those makes calls on problems of size N over four, et cetera, all the way until you get to the bottom where you hit the base case. There we have sets of points that are three points or fewer, like two points. That's the base case, where we stop making recursive calls. 09:25:48 Once we have this tree, it helps us get at the solution. The first step is to work out how deep this tree is. If we call the root node depth zero, then depth one, two, et cetera, the first thing we compute is how large the subproblems are at each level. Here at the top level we have a problem of size N. One level down we have problems of size N over two, then N over four, et cetera. In general, at depth D we have subproblems of size N over two to the D. 09:26:25 We know that once you get down to the bottom, the base case, we have problems of size three or less. That's when the base case takes effect. We can use this now to solve for how deep this tree is, by saying that in the base case the size of the subproblem has to be at most three. That gives us this inequality here, saying the size of the problem at depth D, if D is the bottom layer of the tree, has to be at most three. If we solve this, we get that D is on the order of log N. 09:27:04 That makes sense, because each time you go down one level in this tree you are cutting the size of the problems in half, and the logarithm is what tells you how many times you can cut a number in half before you get down to a constant size. Now we know the depth of this tree is on the order of log N. That's the first thing we worked out. Now we can go on to add up how much work is being done over this whole tree. The first thing we do is count how many nodes there are in the 09:27:41 tree. We know the size of the problem at each level; we need to keep track of the number of nodes. That's straightforward to do. We have one root node. That makes two recursive calls.
We have two nodes one layer down. Each of those makes two recursive calls, so we have four, eight, et cetera. At depth D we have two to the D subproblems. In particular, if we go down to the leaves and look at how many leaves there are, there are on the order of N over three leaves. 09:28:19 We can see that by taking two to the D and plugging in this expression for D at the bottommost depth we got here. Or you can see that, since the base case is a set of three points, if we start with N points, then to divide it up into groups of three we need N over three groups. That's how many subproblems we have at the bottom level of the tree. Now, finally, we can solve the recurrence by adding up all of the work being done in each of these nodes. 09:28:53 At a particular node, you have the two parts of the recurrence. We have the recursive term, which says that for T of N we make these two recursive calls. We also have a nonrecursive part, which is the actual steps, the actual calculations being done at this node that aren't part of the execution of the recursive calls. We can add that up across all of these nodes, and that will give us the total amount of work done in the whole algorithm. It's convenient to do it layer by layer. 09:29:35 Now that we know the subproblem size and the number of subproblems, that lets us count the total number of nonrecursive steps. At depth zero we have one subproblem of size N. If this nonrecursive term is CN, we will have CN times one subproblem, so we have CN steps at the root node. If we look one layer down, now we have two subproblems, each of size N over two. Since the term is CN, we have C times the problem size at this level, which is N over two, times the number of nodes, which is two. 09:30:14 In this case it works out to CN. In general, at the Dth level the problem size is N over two to the D, so the nonrecursive work in a single node is C times N over two to the D, but then we have two to the D nodes. The total amount of work is CN. Now we can conclude that the amount of work done at each level is theta of N: in this case it's CN at level zero, CN at level one, level two, et cetera. There's a linear amount of work done at each level of this 09:31:05 tree. We have log N levels; we worked that out before. The total work, adding up every level, would be on the order of N log N. You've got log N levels, each with a linear amount of work. Overall it's N log N, which is what we claimed the asymptotic bound to be. Before we move on, any questions on how to use this technique? How did we find the depth? This is an important part; let me go over that again. The idea was, 09:31:42 if we look at how large the subproblems are at each level, we get this expression by observing that they get divided by two each time you go down a level. At depth D we have N over two to the D. At the bottom of the tree, when we get to the leaves where there's no more recursion, we know the problems have to have size three or less, because that's the base case for the algorithm. Once you get down to three points or fewer, you don't make recursive calls. 09:32:47 You solve it directly. The question of how deep in this tree the leaves are is the question of when this expression reaches three or less. That gives us this inequality. We are asking how deep you have to go so that N over two to the D reaches three or less, and solving this gives us our bound of log N for the total depth of the tree. Make sense? Any further questions?
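As a quick sanity check on that N log N answer, one can also just evaluate the recurrence numerically. The following is a minimal sketch, not something done in the lecture, assuming unit constants and a base case of cost one for N at most three; if the recursion-tree analysis is right, the ratio of T of N to N log N should level off near a constant as N grows.

```python
import math

def T(n):
    """Evaluate T(N) = 2*T(N/2) + N with T(N) = 1 for N <= 3."""
    if n <= 3:
        return 1.0            # base case: constant work on tiny inputs
    return 2 * T(n / 2) + n   # two recursive calls plus linear merge work

for n in [2 ** k for k in range(4, 15)]:
    ratio = T(n) / (n * math.log2(n))
    print(f"N = {n:6d}   T(N)/(N log N) = {ratio:.3f}")
```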
This is one good technique for solving recurrences: drawing this recursion tree. 09:33:31 A couple of things to note about this. One is that we were a bit informal here with these exact expressions for the size of the subproblems, the total work, and so forth. I wasn't keeping track of the exact depth, I was keeping track of its order; here the base-case size could be three or less, and the number of subproblems at the bottom is not exactly N over three but roughly that. You can make this reasoning precise, but in general it's often useful to do the recursion tree method in an informal way, 09:34:08 as a method for guessing the solution to a recurrence. Details like the base of the logarithm, or whether there are exactly N over three subproblems or a constant times that, don't change the result you get, but to formally prove the solution to a recurrence we need to use a different technique, unless you are very careful in doing this reasoning. It's often best to start out with the recursion tree method to get a guess for the solution to the recurrence, 09:35:37 then prove that solution in a separate step. That's what I will talk about next. That was method one for solving a recurrence. The note here is that it's okay to use the recursion tree method informally to guess, or to make a good guess at, the asymptotic solution to a recurrence, but to be formal we need to prove that the solution actually works. That leads us to our second method of proving or finding the solution to a recurrence, 09:36:28 which is more precise. The second method is to prove that a solution works using induction. This method is sometimes called the substitution method; we will see why in just a second. The idea is that we are going to essentially guess a solution through some other method, maybe the recursion tree method, and we will substitute it into the recurrence to prove that it satisfies the recurrence. Let's look at an example. 09:37:34 Let's say we have the same recurrence we had before: T of N is two times T of N over two plus some extra work on the order of N. I will say O of N; we will do an upper bound. Let's guess that T of N is O of N log N. We could get this from the recursion tree method. Say we have this guess: we think T of N is O of N log N. How can we prove this? What does it mean for T of N to be O of N log N? 09:38:30 Using the definition of the big O notation, we need to prove that there exist constants N zero and C, both greater than zero, such that for all N greater than N zero, so beyond a certain point, for all sufficiently large N, this function T of N is at most that constant C times N log N. This is using the definition of the big O notation. Remember, what this notation means is that for sufficiently large N, T is bounded above by some constant multiple of N log N. 09:39:30 This is what we need to prove. The way we are going to do this is we will fix N zero and C. They need to be some constants; let's not worry about their exact values now. Let's say we fix some values for them, and we will see below how we should fix them. Fix those two values and prove that T of N is at most CN log N for N greater than N zero, by induction on N. We will use the principle of mathematical induction. Here we have a statement we want to prove for all integers N 09:40:15 beyond a given point. A good method for doing that is to use induction. We have a base case for the smallest value of N.
Then we will show that if this inequality holds at a particular value of N, it also holds for N plus one. For those of you who may not be familiar with mathematical induction, we posted resources on Piazza last week, including handouts with practice problems doing proofs by induction and a more detailed description of how induction works. 09:41:20 How do we do this proof? Like any proof by induction, there's a base case and an induction case. I will write it explicitly here; there's two cases. The base case: we are trying to prove this holds beyond some value N zero, so our base case is N equals N zero. We need to show that T of N zero is at most C times N zero log N zero. A question that's coming in here is how did we know we had to make a guess of N log N? 09:41:56 In this case we got that from the recursion tree method. The result of the computation we went through last time with the recursion tree method suggested that the solution to this recurrence is order N log N. The recursion tree is one way; you may have a guess for a variety of different reasons. Sometimes you may also be able to just guess because you have seen similar recurrences before. 09:42:40 It doesn't matter how you come up with the guess. The induction technique is for verifying, once you have a guess, that it actually satisfies the recurrence. The induction technique can also help you find some details of the solution. Here we are trying to show that T is bounded by some constant times N log N; the induction will help us find out what value for C is necessary. Sometimes, instead of guessing an exact function, you can guess some function that has undetermined parameters. 09:43:21 You have a template with parameters like C, and while doing the induction you see what conditions have to be true of those constant values so that the proof will go through. Sometimes the induction can help you figure out exactly what function you need for the solution; you still need a guess for the form of the solution. A question here: will we always need to use the definition of the big O notation? Since what we are trying to prove is that T of N is O of N log N, if you try to prove a statement 09:44:04 that involves the big O notation, you need to use the definition of the big O notation. If you are proving it's omega of something else, you use the definition of the omega notation. When trying to prove some statement, it's generally the case that you need to use the definitions of the terms involved in the statement. Here we need to know what it means for T of N to be big O of N log N. What I have written out here is this line here; this is the formal definition of this notation here. 09:44:44 I'm saying that in order to prove this line, we are going to do an induction on N where we assume N zero and C are just constants. What's the induction hypothesis? It's just this. What we are trying to prove is that there exist these constants such that for all N greater than N zero this inequality holds. The induction hypothesis is that this inequality holds for all N up to some point. You have to prove the base case, where you have the smallest value of N. 09:45:25 That's what we are doing here. The base case is N equals N zero; we establish it here. Then in the induction case we are going to assume this inequality holds up to some value of N and then we want to prove it holds for N plus one. We will see that in a second.
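For reference, here is the claim and the two proof obligations written out in symbols; this is just a transcription of what is on the board, with the big O definition unpacked:

```latex
% Claim (T(N) = O(N \log N), unpacked):
\exists\, C > 0,\; N_0 > 0 \;\; \text{such that} \;\; \forall N \ge N_0:\quad T(N) \le C\, N \log N.

% Base case:
T(N_0) \le C\, N_0 \log N_0.

% Induction case (strong induction), for N > N_0:
\Bigl(\forall M,\; N_0 \le M < N:\; T(M) \le C\, M \log M\Bigr) \;\Longrightarrow\; T(N) \le C\, N \log N.
```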
Just to continue with the base case here: we need to show that T of N zero is at most some constant times N zero log N zero. How can we do that? Well, what we can do is notice that T of N zero is some number. 09:46:11 T is a function; you plug in some particular number N zero and you get some number out. You pick C large enough that the right-hand side is bigger than that particular number. The way that could fail is if N zero were one: then log of one is zero, and no matter how big you pick C, the right-hand side will not get any bigger. We will take N zero to be strictly greater than one. If it's one, we have a problem with the right-hand side being zero. We just need some N zero to exist; 09:47:11 we have freedom in how we pick this value. Let's pick a value greater than one, then pick C large enough to satisfy the inequality. If we pick N zero greater than one, then N zero log N zero is positive, and we can pick C large enough that the right-hand side is bigger than the left-hand side. Usually the base case in a proof like this is straightforward, because for a fixed input size the algorithm takes some fixed amount of time. If N zero is fifty, T of fifty is some finite amount of time. 09:48:19 All you have to ensure in the base case is that your guessed solution for the recurrence is at least that large at N zero. That's the base case. Let me move on to another piece of paper so we have more room. What I'm saying here is, with this inequality, I mean this inequality up here, we are picking C large enough to satisfy it. Let me write it again. We are trying to prove, remember, that T of N is at most C times N log N for all N greater than or equal to N zero. 09:49:19 Now we need the induction case. How will that work? The idea is we are going to assume the induction hypothesis holds up to some point. We will assume T of M is at most C times M log M for all M less than some point N. The idea is we assume the hypothesis holds for everything up to some point N, and now we need to prove that it holds at N. That will allow the induction to go through. For those of you who have looked at the notes and seen the different forms of induction, 09:50:00 this is strong induction. We are assuming the hypothesis holds for all values smaller than the current value, and now we have to prove, using those, that it holds for the current value N. We are assuming we have already proved the algorithm is bounded by this amount of time for inputs of size less than N; now let's show it's also bounded by our desired inequality for inputs of size N. Now that we have assumed that, let's look at what the run time T of N is. 09:50:48 A question: can we explain why the induction case is like this again? The idea is, here's what we are trying to prove: T of N is at most C times N log N for all N, at least all sufficiently large N. When you prove something for all integers N, a good way to do that is by induction. You can say, for a given value of N, let's prove that if the statement holds for all values less than N, then it also holds for N. If you can do that, and you can prove it holds for N zero, then it's going to hold for N zero plus one, 09:51:30 because it holds for all values less than N zero plus one. Then it holds for N zero plus two, and by repeating that you show it holds for all integers. The principle of mathematical induction says that if you have a property that holds for some value N zero, and if the truth of that statement for all values less than N implies it for N, then it's true for all integers greater than or equal to N zero. Okay.
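Here is one explicit way to pick the constants for the base case, since the lecture leaves the exact values open; this particular choice is just an illustration. Take N zero at least two, so that N zero log N zero is positive, and then any sufficiently large C works:

```latex
N_0 \ge 2 \;\Rightarrow\; N_0 \log N_0 > 0,
\qquad
C \;\ge\; \frac{T(N_0)}{N_0 \log N_0}
\;\Longrightarrow\;
T(N_0) \le C\, N_0 \log N_0.
```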
Here we are assuming, let's say, that the algorithm runs in this amount of time for all smaller inputs. 09:52:17 If we can show it runs in this amount of time for N as well, that is enough to complete the induction. For any size of input, it's going to make recursive calls on smaller sizes of the problem, and by induction we can assume those run in the desired time, so the overall thing will too. Our induction hypothesis is that the amount of time on problems of size M, where M is strictly less than N, satisfies our desired bound: the constant times the input size times the log of the input size. 09:53:12 By our recurrence, we know that T of N is two times T of N over two plus O of N. That's what we started out with; that's our recurrence. By the induction hypothesis, the assumption we have here, we have that T of N is at most, well, let's plug in this inequality for T of N over two. Because N over two is strictly less than N, our assumption applies. If we plug in the bound for T of N over two, we get that this is at most two times C times N over two log N over two, 09:54:15 plus O of N, which means this last term is bounded by some other constant, let's call it D, times N. Remember, big O of N means this term is bounded by some constant D times N, at least for sufficiently large N. We can pick N zero large enough that we are in the regime where this term is bounded by D times N. I see, one confusion is: why are we looking at M less than N, don't we want to show it works for things greater than N? 09:55:03 There are different ways to set up an induction. You can show that if it holds for inputs of size N, it holds for inputs of size N plus one. In this case, that's not convenient. The recurrence expresses the value at N in terms of the value at N over two; since we are not relating T of N and T of N plus one, it's convenient to use the strong form of induction. Let's assume the hypothesis holds for all inputs below the current size we are looking at. If we want to prove it for N, we assume the hypothesis is true for everything below 09:55:48 N, and now we prove it for N itself. That lets the induction go through. The argument I gave before is: say we have a base case of zero, and you prove that if it holds for all values less than N, then it holds for N. Then it would follow that the hypothesis holds at one, because the only input less than one is zero, which is the base case you already proved. Then from those two it follows that it holds at input size two as well: the only values less than two are one and zero, and you proved both. 09:56:31 So it holds for the value two as well, et cetera. You are able to show the property holds for all integer values. It's often the case, when reasoning about recurrences like this where you are dividing the input by some factor, like this N over two, that strong induction is the most convenient way to do the proof, rather than the simpler form of going from N to N plus one. It's usually easier to assume the hypothesis holds for everything less than N, 09:57:25 then deduce that it holds for N. It's up to you. In some cases you may be able to use a weak induction too, where you go from N to N plus one. That may be useful if you have a recurrence where the right-hand side has N minus one instead of N over two. Let's continue with the proof here. We are saying, for T of N, we know by the recurrence that it's at most two times T of N over two plus a term that's O of N.
If we plug in our hypothesis here for T of N over two, because N over two is less than 09:58:38 N, our assumption applies and we get this bound on this term. For this other term we have DN for some constant D. Now we have this bound, and we can simplify it a bit. If we cancel the twos here, we will get CN times log of N over two, plus DN. Now we can simplify this more by using the rule for the log of a quotient. If we do that we get CN times, this is going to be, log of N minus log of two. I split those terms up; that's expanding the logarithm there. 09:59:43 Let's combine these terms. We have a couple of different growth rates in these terms: we have an N log N term, and these other terms are linear. Let's combine those. We have CN log N plus, what's the coefficient? We have D minus C log two, times N. All I did there is combine these two terms: we have the DN term here and the minus CN log two term here. What we want to show is that T of N is at most C times N log N; that's what we are trying to prove. 10:00:44 What we can observe is that if we take C large enough, that will be the case. If we take C to be at least D over log two, then this whole thing will be at most C times N log N. Everyone see that? Remember, C is just this constant we get to pick. We saw here that we have to pick C large enough for the base case; we can increase C beyond that. D is a constant hidden by the big O notation. We don't control D; 10:01:24 it's some number. We take C large enough to satisfy this inequality over here, and also to be at least D over log two, in which case this term in the parentheses will be zero or negative. So this will be at most CN log N. Where did this bound come from? I'm trying to show T of N is at most C times N log N. We can do that by making sure this extra term is negative. We have this term, which is what we want, and we have this extra term here that we want to get rid of. 10:02:19 If we pick C large enough, this term goes negative, and this whole thing is bounded above by CN log N. We worked out exactly how big C has to be, but the exact value is not important. The essence is that there's some value for C such that this term here is negative. Then we have established what we want, which is that T of N is at most CN log N. This establishes the induction hypothesis at N: we assumed it held for all values less than N, and now we have proved it also holds for N. 10:03:15 Now we have the base case and the induction case for this induction. That means the hypothesis actually holds for all N, in this case for all N greater than N zero. By induction, T of N is at most CN log N for all N greater than or equal to N zero, which is what we wanted. To clarify: if this right term is negative, then the sum of these two terms is bounded above by just the first term, so T of N is at most CN log N. 10:04:17 If we have shown it's equal to CN log N plus something negative, then it's less than just that first term without the negative term. If you show something is at most N minus five, it's certainly at most N. This gives an example of how you can formally prove that a function defined by a recurrence satisfies an asymptotic bound.
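Collecting the induction-case algebra from the board in one place, where D is the constant hidden by the O of N term:

```latex
\begin{aligned}
T(N) &\le 2\, T\!\left(\tfrac{N}{2}\right) + D N
      \;\le\; 2 \cdot C\,\tfrac{N}{2}\log\tfrac{N}{2} + D N \\
     &= C N (\log N - \log 2) + D N
      \;=\; C N \log N + (D - C \log 2)\, N \\
     &\le C N \log N \qquad \text{whenever } C \ge D / \log 2.
\end{aligned}
```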
The idea, to recap, was that we set up an induction where our hypothesis is the bound we want. 10:05:00 If it's a big O bound, we want an upper bound with a multiplicative constant. If it's an omega statement, it's a lower bound with some constant. We establish this bound by induction on N. We have some base case, usually N equals N zero, the point at which the big O kicks in, then an induction case showing that if the inequality holds for all inputs of size strictly less than N, it holds for N. The general approach there is to use the recurrence to expand T of N in terms 10:05:41 of smaller problems. For smaller problems the induction hypothesis applies; you plug that in and do some simplifying, and hopefully you end up with the same inequality you were trying to prove. That is exactly the induction hypothesis at N. This establishes the induction case, and by induction it holds for all N. There was a question about why these lines have equal signs rather than less than. It's algebraic manipulation; I'm not losing anything there. 10:06:21 These are equal to each other: here I canceled the two and expanded the logarithm. At this step and this step I actually introduce an approximation, so we have a bound rather than an exact equality. There's a question about this inequality here: since it's greater than or equal to, what happens if C is bigger than this value? If you look at this term here, if C were exactly D over log two, then the log two would cancel and you would get D minus D, which is zero. 10:07:00 If C is bigger, log two is positive, so this subtracted term gets bigger while D stays the same, and this quantity in the parentheses goes negative. The point is, if we take C to be at least as large as D over log two, this term is zero or negative, and that's enough to show T of N is bounded by just C times N log N. Here we get to pick C, because if we look back at what we are trying to prove, it's that there exist some values of N zero and C 10:07:43 such that this inequality holds. We get to pick them. The proof helps us find the conditions we need to satisfy when picking C: we need to pick C large enough that this inequality holds, and also large enough that this other inequality holds. That is fine; we can pick a value for C big enough that both hold, and then this induction proof goes through. Another question: when we did the recursion tree we got a theta bound for the solution to the function T of N, 10:08:11 whereas here we are only getting the big O. That's right. This argument was only doing the upper bound; we proved an upper bound here. If you want a theta bound, you could do a second induction, which is relatively similar except that now you are trying to prove T of N is omega of N log N. It would be the same inequality here, except with a greater than or equal to instead of a less than or equal to. 10:08:53 If you prove both of those two statements, the upper and the lower bound, then you have established that T of N is theta of N log N. Here, just because I wanted to give an example, I only did the upper bound. The lower bound is similar; you can structure the induction in the same way, but now the hypothesis would be different: instead of less than or equal to, it's greater than or equal to. That's the second technique for proving or finding the solution to a recurrence: 10:09:35 proving it by induction. This is a formal way to verify a guess, which you might get through the recursion tree method or some other way. There's a third method which I want to briefly mention here, because you will find it useful on the homework. It's a plug-and-play kind of method. It's proved using the recursion tree technique, and it allows us to solve a lot of recurrences by plugging into a general formula 10:10:47 rather than going through the construction of the recursion tree.
The third method for solving recurrences is for recurrences of the form (Writing on board). This form comes from divide and conquer algorithms: we divide into A subproblems, each of which has size N over B. If you have a recurrence of this form, we can use the master theorem for divide and conquer recurrences. The reason it's called the master theorem is that it's a general result that works for many recurrences of this form. 10:11:33 It's not for one specific recurrence like two times T of N over two. It lets you solve a lot of recurrences that have the same general form by plugging in the values of A and B and something about this other term F. I will state this theorem and show you how to use it, but if you want to see a proof, it's proved using the recursion tree method. There's a proof in the CLRS book; I can put a reference to it on Piazza. We will not prove it. The proof is by recursion trees; 10:13:03 you can see the CLRS book if you are interested in how that works. The statement of the theorem: suppose we have a function T of N satisfying a recurrence of this general form, T of N is A times T of N over B plus some extra term F of N, where A is some constant greater than or equal to one, B is a constant strictly greater than one, and F of N is just some arbitrary other function. This theorem also handles the case where you don't have quite this recurrence. 10:13:36 An issue that happens when dividing a problem into subproblems is: what if the input size is not evenly divisible by B? For example, in the divide and conquer algorithm we had for convex hull, if you are dividing the set of points in half, what if you have five points? Then you can't divide it equally. What you end up with is some subproblems that are slightly different in size than others. Some would be the floor of N over two, 10:14:30 the others would be the ceiling of N over two. They may be off slightly; they are the closest integers to dividing it by two. This theorem handles that case, which I was glossing over when looking at the recursion tree method. The theorem also covers the same recurrence with each T of N over B term replaced by either T of the floor of N over B or T of the ceiling of N over B. In case you have not seen this notation before, 10:15:24 this is the floor and this is the ceiling, which you need because N over B may not be a whole number. Floor means you round down to an integer, and ceiling means you round up to an integer. So you can have a recurrence like this, or, instead of two times T of N over two, you could have T of the ceiling of N over two plus T of the floor of N over two. You can have the floors and ceilings in there. Then we define what's called the critical exponent, we will call it C, 10:16:24 to be the log base B of A. We have these two constants A and B; if we take the log base B of A, let's call that C, then we have three cases in which this theorem applies. One: if this extra term in the recurrence, the F of N, which is the nonrecursive part, the time needed to combine the results of the recursive calls and get the final result, if this grows slowly enough, in particular if it's O of N to the C minus epsilon. If C is two, this says F of N grows slower than N squared: 10:17:12 it's N to the two minus some constant epsilon in the exponent. Epsilon could be small, as long as it's a positive constant. If F of N grows at most that quickly, then T of N is theta of N to the C. The idea here is that what this constant C is basically telling you is how much work you have to do just from solving all of the recursive calls.
This is computed in terms of how many subproblems there are and how big those are: what is the growth rate of the work needed to do all the recursive calls? 10:18:11 That amount of work is on the order of N to the C. If this F of N function grows more slowly than N to the C, the overall solution to the recurrence is dominated by the time needed to do the recursive calls, and so you get T of N being theta of N to the C. But if F of N takes longer, we could have, if F of N is theta of N to the C, then T of N, the amount of time, is a little more: it's N to the C times log N. These are supposed to be thetas here; 10:18:56 this one is big O. We say if F grows like N to the C minus epsilon or slower, if it's small enough, the solution is N to the C. It doesn't matter how much time F takes as long as it's at most this; everything is dominated by the recursive calls, and this term does not affect the recurrence much at all. Now, the second case: if F of N grows like N to the C, and this is theta, meaning it grows at least as quickly as N to the C and at most as quickly as N to the C, 10:19:49 if it's the same order as N to the C, then the amount of work you do is larger than N to the C by a factor of log N. What if F grows faster? If F of N is omega of N to the C plus epsilon for some epsilon greater than zero, we are saying it grows faster than N to the C; it grows at least as fast as a polynomial of degree strictly higher than C. In that case we get a different bound. In this case the run time of the algorithm is dominated by the second term in the recurrence. 10:20:24 The recursive part is asymptotically not as much work as doing the second term at the top level. In order for that to work, we actually need one more technical condition to make this completely true, which is almost always satisfied. It is satisfied by all the functions we will see in this course, but we need to write it here for this to be precise. We need, if you look at the work being done one level down, if you look at calling F on a subproblem, 10:21:19 instead of calling F on the original problem size N, you look at the subproblem size, which is N over B, and you do it A times, which accounts for all the children one level down in the tree: this needs to be at most D times F of N for some D strictly less than one, at least for sufficiently large N. This says the amount of work being done in this nonrecursive term one level down is strictly less than the amount of work being done at this level in this F term. 10:22:03 If this is the case, which as I mentioned will basically always hold for the functions we will look at, but you do have to check this inequality holds, then T of N is dominated by the nonrecursive term: T of N is theta of F of N. In this last case, the run time of the whole algorithm is dominated by the latter term. It's asymptotically just as long as it takes to run this term at the top level. All the recursive stuff is asymptotically not important; it's dominated by the work done just at the top level. 10:22:46 Basically, these three cases correspond to: the recursive part dominates; the two parts, recursive and nonrecursive, do equal amounts of work, which gives the extra log factor; or the nonrecursive part dominates. As I said, we will not prove this theorem. It's proved using the tree technique we saw before; the details are in CLRS. The important thing is that it gives an easy way to solve recurrences by taking the A and B and the function F of the recurrence you are interested in, 10:23:37 computing C, and checking if you are in one of these cases. If you are, you read off the answer.
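To make the plug-and-play nature of the theorem concrete, here is a minimal helper, a sketch rather than anything from the lecture, restricted to the common case where F of N is a polynomial, theta of N to the D. For such F the regularity condition in case three holds automatically, since A over B to the D is less than one exactly when D is greater than log base B of A. The function name master_theorem is made up for illustration; it is applied to the two examples worked next.

```python
import math

def master_theorem(a: float, b: float, d: float) -> str:
    """Asymptotic solution of T(n) = a*T(n/b) + Theta(n^d), with a >= 1
    and b > 1, restricted to polynomial driving functions f(n) = Theta(n^d)."""
    if a < 1 or b <= 1:
        raise ValueError("need a >= 1 and b > 1")
    c = math.log(a, b)  # the critical exponent, log base b of a
    if math.isclose(d, c):
        # Case 2: recursive and nonrecursive work balance; extra log factor.
        return f"Theta(n^{c:g} log n)"
    if d < c:
        # Case 1: the recursive calls dominate.
        return f"Theta(n^{c:g})"
    # Case 3: f dominates; regularity holds automatically for f(n) = n^d.
    return f"Theta(n^{d:g})"

print(master_theorem(9, 3, 1))    # 9 T(n/3) + Theta(n) -> Theta(n^2)
print(master_theorem(1, 3/2, 0))  # T(2n/3) + Theta(1)  -> Theta(n^0 log n), i.e., Theta(log n)
```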
Let's look at a couple of examples in the last few minutes here. Suppose we have a recurrence like (Writing on board). This would be like saying the algorithm splits the problem into nine subproblems of size N over three, then does some linear amount of work to merge those. How do we apply the theorem? We need to compute C. What do we have in this case? 10:24:42 C is log base B of A. That's log base three of nine, which is two, because three squared is nine. Which case are we in here? We need to compare F of N, which is this term here, N, to N to the C, which is N squared. We are in case one, because N is O of N to the two minus epsilon. By the theorem, then, we get that T of N is asymptotically theta of N to the C, which is N squared. 10:25:28 We can just read off the solution to that recurrence. Let's look at one more example. We will finish this up on Friday, and I will give an example of the third case as well. Let's say we had T of N is T of two N over three plus a constant. You are doing one subproblem of size two thirds of the original, plus some constant amount of work. Let's compute the exponent C. It's the log base B of A. That's the log base three halves of one, which is zero. 10:26:20 The log of one to any base is zero. We are in case two, because one is theta of N to the C, which is N to the zero, which is one. Now we can read off the solution again as being theta of N to the C times log N. N to the C is N to the zero, which is one, times log N, so we get theta of log N. You can see this gives a simple way to solve recurrences, by plugging in these constants and seeing if one of the three cases applies. That's all the time we have for today. I forgot to mention: 10:26:58 homework one is posted. It's online. I put a test assignment there as well, if you want to try out Gradescope for the first time to make sure you can upload things properly. A question about the waitlist: sorry about that. Unfortunately, it's looking like we can't get another TA. We are still hoping to be able to expand the enrollment a little bit, but we are not going to be able to tell until this weekend. We need to see how much attendance there is in section 10:27:27 to see how much we can expand without burdening the TAs. I have saved the waitlist, so there's hope for you if you are on the list. You can stay involved with the announcements and everything. I hope to get a few people in, but probably not everyone, unfortunately. I will keep you posted on Piazza, and I will hang around now if you have more questions. Otherwise, see you all on Friday.