09:22:15 >>Professor: Good morning everyone. We will wait another minute before getting started. Okay, let's get started. Hello everyone. I hope you had a nice three-day weekend. To recap a bit what we did last Friday, since it was a while ago: we talked about the paradigm of divide and conquer algorithms. The idea is you divide the problem into subproblems, which you solve recursively for free. All you have to think about is how you can combine 09:22:56 solutions to each of those subproblems into an overall solution for the original problem. That's divide and conquer. Today we will give examples of divide and conquer algorithms to help you with your thought process when trying to come up with new divide and conquer algorithms for a new problem. As we said last time, the whole reason divide and conquer is useful is that instead of solving the entire problem directly, you assume you can solve smaller 09:23:32 versions of the problem, and you divide the problem up and recombine the answers to get an overall solution. That's easier than solving the problem directly. We saw that with merge sort, where we sort a whole list. We can assume we have split the list into two parts, each of which is individually sorted in increasing order. All we have to do is figure out how to combine these two lists into an overall sorted list. For this problem that's not very hard to do. 09:24:06 We had this procedure that I will not go over in detail again. The idea was: if you have the two sorted lists, you can compare the first elements of each list, take whichever is smaller, and iterate through both lists, eventually building up the entire combined list. This takes linear time in the sizes of the original two lists. That was a simple procedure for merging two sorted lists. That gave us this overall sorting algorithm 09:24:47 for merge sort. We have a base case: if you have few elements, you can sort directly. In practice people usually stop before N equals one and switch to another sorting algorithm. If we have a list of one element or less, it's already sorted; otherwise we divide the input into two halves, the first and second half. We sort those and call the merging procedure to combine them. It's a simple algorithm. We saw in section that you can come up with a recurrence for this algorithm, 09:25:25 seeing that there are two recursive calls, each on a problem half the size of the original, and the nonrecursive work you are doing here and here is linear in total. You have a recurrence that you can solve to get N log N run time overall for merge sort. That was the idea of how we would come up with this algorithm. Once we decided we would split the array into two halves, we needed to think about how to combine the sorted versions of the first and second halves into an overall sorted list. 09:26:06 Once we have done that, the algorithm writes itself. To prove this actually works, we also illustrated that last time. We do that using induction. This is a general technique that you will use for proving the correctness of any recursive algorithm. Formally, what we want to show is: for any list A of size N, the output of merge sort on this list is a sorted permutation of A. Since we are proving this for any size of list, it's natural to do this by induction. 09:26:47 We will reason that it works for lists of size one, and it will follow that it works on lists of size two, and lists of size three, et cetera. By induction we will find it works on lists of any size. That's what we did.
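(For reference, here is a minimal Python sketch of the merge procedure and merge sort just recapped. The lecture's pseudocode was on the board, so the names merge and merge_sort here are illustrative, not the professor's.)

def merge(b, c):
    # Merge two sorted lists into one sorted list in linear time.
    result = []
    i = j = 0
    while i < len(b) and j < len(c):
        # Take whichever front element is smaller.
        if b[i] <= c[j]:
            result.append(b[i])
            i += 1
        else:
            result.append(c[j])
            j += 1
    # One list is exhausted; append the rest of the other.
    result.extend(b[i:])
    result.extend(c[j:])
    return result

def merge_sort(a):
    # Sort list a by splitting it in half, recursing, and merging.
    if len(a) <= 1:              # base case: already sorted
        return a
    mid = len(a) // 2
    b = merge_sort(a[:mid])      # sort first half recursively
    c = merge_sort(a[mid:])      # sort second half recursively
    return merge(b, c)

For example, merge_sort([5, 2, 8, 1]) returns [1, 2, 5, 8].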
The idea was that our base case corresponds to the base case of merge sort itself. If we look at the pseudocode, the base case of the algorithm is when you give it a list of size at most one: N is at most one, and the algorithm returns A. 09:27:23 We know A is sorted because it has at most one element, so the base case is fine. For the induction case, we will assume, when trying to prove merge sort works on lists of length N, that it works on lists of any length strictly less than N. The reason that helps us is that it implies these recursive calls work correctly. These two calls are on lists of half the length of the original, which are strictly smaller. If we assume the algorithm works correctly on these two recursive calls, 09:28:07 then B and C will be sorted. We then argue, by the correctness of the merge procedure, that the merged result will be correctly sorted, and therefore the algorithm works for lists of length N. By induction it works for lists of any size. That's the idea. While the details of this reasoning were specific to merge sort, the general approach, having the base case correspond to the base case of the algorithm, and having the induction case let you assume the recursive calls work, using strong induction, 09:28:53 will work for any kind of recursive algorithm. That's generally how you want to do this correctness proof. Before we move on from this, any questions about what we talked about last time? As I said, the goal for today is to look at a couple more examples of divide and conquer algorithms, to help you think about what is involved in coming up with a new divide and conquer algorithm. 09:29:31 There are a couple of things you need to have in mind. A question from the chat about the homework: the homework deadline was extended to Thursday. I sent an announcement on Canvas. If you didn't get that announcement for some reason, make sure you are added to Canvas. I thought all the people who got in from the waitlist had been added. If you didn't get that announcement, check your email, and if you still didn't get it, let me know. 09:30:24 I can add you to Canvas. If you already submitted the homework because you missed the announcement, you can submit a new version on Gradescope. If you want to work further on it, you can do that. Okay: designing divide and conquer algorithms. As we said last time, there are only two questions you have to think about when you are designing this kind of algorithm. One is how to divide the overall problem into subproblems. 09:31:25 This tends to be straightforward. There's usually a natural way to divide the problem. If you have an array, maybe you will cut it in half. If you have a set of points, maybe you will draw a dividing line to separate them in half, like in the convex hull algorithm. What is more difficult is, once you have the solutions to the subproblems, putting them together into an overall solution. What sometimes happens when designing a divide and conquer algorithm is 09:32:12 you may think of a natural way to divide the problem up, but it's not clear how you can combine the subproblem solutions to get an overall solution. Then you may have to go back and see whether there's a different way of dividing the problem up. Let's look at some divide and conquer algorithms and see how we can answer these two questions. The first problem we are going to look at today is integer multiplication. The idea is we have two integers and we want to multiply them. 09:33:00 Given integers X and Y, compute X times Y.
You will recall that in our computation model, the RAM machine, we assume an arithmetic operation like this can be done in constant time. So why do you need an algorithm for this? The reason is that we only assume that integers that fit in registers, that fit in a single place in memory, can be operated on in constant time. Say you have a machine with thirty-two-bit registers or sixty-four-bit registers: 09:33:46 it's constant time to work with those numbers, but if I give you a thousand-bit integer, that doesn't fit in one register. You have to store it in an array, using several parts of memory to store the number. You need an algorithm to compute the product of two such numbers; it's not a built-in operation on the machine. It's important to note that the RAM machine can only multiply numbers in constant time if they fit in the registers. 09:34:29 Otherwise, we need an actual algorithm, basically saying how we can do the multiplication of these long numbers in terms of smaller operations we can do on the machine, like thirty-two-bit multiplication. Something like that. Think of your actual laptop, for example: the CPU in your laptop or other computer has a primitive operation built in, an assembly language instruction, for multiplying two registers together. It doesn't have an operation built in to multiply 09:35:28 two one-thousand-bit numbers. You need a sequence of instructions to do that computation. This is integer multiplication: if I give you two N-digit numbers, how quickly can you compute the product? One algorithm for this is the algorithm we all learn in elementary school for multiplication. You may not have used this algorithm in a long time, but you will remember we have this method which says, if we want to multiply thirty-seven by one hundred fourteen, 09:36:14 you take individual multiplications of the first number by each digit of the second, shifting things over by one each time, and you add them up. The idea is first we will do thirty-seven times four. Four times seven is twenty-eight; we write eight here and carry the two. Four times three is twelve, plus the carry gives fourteen, so the first row is one forty-eight. Then we shift over by one and multiply by one; you get thirty-seven. Then we shift by two and multiply by one again. 09:36:50 You add these all up to get the final result. You get eight there. Here you get eleven; write one, carry the one. Then twelve; write two, carry the one again. You get your final result, four thousand two hundred eighteen. Hopefully I got this right. It doesn't matter. You will recall this algorithm. The idea is we are multiplying the first number by each digit of the second number in succession, shifting the results over by one each time, and then adding them all up. 09:37:42 How long does this take if X and Y have N digits? How long can this naive algorithm take? I have not written pseudocode for this algorithm, but thinking about how we did it here, what do we have to do in each step? We have to multiply by every digit. We have to multiply this thirty-seven by four, the first digit here. That requires us to do a linear amount of work: we have to do four times seven, which may carry over; we have to do four times three, 09:38:24 which may carry over, et cetera. We have to go over every digit of thirty-seven and multiply it by four. There's a linear amount of work to compute this first row. Then we do it over again with the second digit and third digit, et cetera. Each iteration, computing each row, takes linear time: you have to do a linear number of single-digit multiplications. How many iterations are there?
There's one for each digit of the second number here. 09:39:18 Then you have to add them all up: adding up N numbers, each of which is potentially N digits, so the addition is quadratic overall too. Each iteration takes linear time and there are linearly many iterations, so the procedure overall, in general, is going to be theta of N squared. We have roughly N iterations; if the second number has exactly N digits, we have N iterations, each of which takes on the order of N single-digit multiplications. 09:40:00 You have N squared in total. A question: aren't there log N iterations? No, we have one row here for every digit of the second number. Here there are three digits, so we have three iterations. You are saying the number of digits is a log. It's true the number of digits is the log of the integer's value, but the number of bits or digits you need to store the number is exactly the size of the input, not the log of the input size. The input to this algorithm is two N-digit numbers. 09:40:33 The size of the input here is of order N, and we need to measure the run time of the algorithm in terms of the size of the input. That's our convention: we always have a notion of the size of the input. Here the notion of size is the total number of digits of these integers X and Y. If Y here has three digits, then we are going to have three iterations. The running time is going to be N squared, where N is the number of digits. 09:41:22 It's true that the actual value of Y is exponential in N, but the amount of memory needed to store Y, the size of the input, is what we are calling N. Does that make sense? Another question: do they each have N digits, or is N the combined number of digits? I will say each of them separately has N digits. It doesn't matter; it will not change the asymptotic results if one has more digits. Let's assume both X and Y have N digits, 09:41:57 so the input size is two N for this problem. Another question was: if we want to multiply three numbers, would the run time be N cubed? If you run the algorithm to multiply the first two, that's N squared. Now you run the algorithm again to multiply in the third number. You have to keep track of the size of the first product: if you multiply X and Y together and they each have N digits, how many digits can the product X Y have 09:42:35 in the worst case? Not N squared. If you have an N-digit number and multiply it with another N-digit number, the total number of digits you can have in the worst case is two times N. Think about the algorithm here: how far over to the left can you shift this? It's N digits over. Maybe you have one more digit from a carry, but in general you have two N digits down here. Then you have to run this algorithm again on one number with two N digits 09:43:17 and another number with up to two N digits, and that's still N squared. For multiplying three things it's still N squared. Good question. This is the naive algorithm for integer multiplication we all learn. It takes N squared time. This is a problem if you have, say, a million digits of precision. A library that people use for arbitrary precision arithmetic could not use this algorithm for very precise numbers or large integers, because N squared grows too quickly.
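(For reference, here is a minimal Python sketch of this grade-school procedure, working digit by digit the way the board example does. The digit-list representation, least significant digit first, and the name school_multiply are illustrative choices, not from the lecture. The two nested loops are exactly the theta of N squared single-digit multiplications just discussed.)

def school_multiply(x_digits, y_digits):
    # Grade-school multiplication of two base-10 numbers.
    # Digits are given least significant first, e.g. 37 -> [7, 3].
    result = [0] * (len(x_digits) + len(y_digits))
    for shift, yd in enumerate(y_digits):     # one row per digit of y
        carry = 0
        for i, xd in enumerate(x_digits):     # linear work per row
            total = result[shift + i] + xd * yd + carry
            result[shift + i] = total % 10
            carry = total // 10
        result[shift + len(x_digits)] += carry
    return result  # digits of the product, least significant first

For the board example, school_multiply([7, 3], [4, 1, 1]) returns [8, 1, 2, 4, 0], the digits of thirty-seven times one hundred fourteen, which is 4218.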
09:44:14 We can use divide and conquer to improve on the running time of the algorithm. Let's see that. Let's think about how we would design a divide and conquer algorithm for this integer multiplication problem. The first question we have to answer is how we are going to divide up the input so that we get two or more subproblems that are smaller. The natural idea you may have here is to split the digits in half. 09:45:08 The simple idea for splitting is: split the digits in half. For example, suppose we had some larger number like forty-two sixteen. Let's split it in half: we will deal with the forty-two and the sixteen separately. We do that by writing the number in terms of its higher-order digits and lower-order digits. This is forty-two times ten squared, plus sixteen. I split it into two parts, the forty-two and the sixteen, where the forty-two is shifted over by two digits. If we take forty-two times ten squared, that's forty-two hundred, 09:46:03 and then we add the lower digits on and get the original number. This is one way of splitting up the input: we take the first half of the digits and the second half of the digits. That's an example. In general, say we have an at-most-N-digit integer X. Then X can be written in two parts: the high-order digits X one, times ten to the N over two, plus X zero. The idea is we take the first N over two digits 09:46:43 and call those X one, and the second N over two digits we call X zero. As usual, let's assume N is a power of two here, so N over two will always be an integer and we don't have to worry about that. Does everyone see what I'm doing? I am saying: given X, we can split X's digits into a first and second half, and the relationship between those is this equation. If you take the number represented by the first half of the digits, multiply by ten to the N over two, and add the number 09:47:27 represented by the second half of the digits, you get back the original number. Everyone see that? A question: why is it addition rather than concatenation? We are doing multiplication by ten to the N over two. If we did forty-two plus sixteen, that doesn't get us back to forty-two sixteen. We have to multiply by ten squared, so that when we add, the forty-two gets shifted into the right place. It's true that if you 09:48:11 concatenate these numbers you get it back, but in terms of values, what does that mean? We are treating these as integers: X one is forty-two and X zero is sixteen. The question is how to recombine those to get back the original X. You shift X one over by N over two zeros, which is multiplying it by ten to the N over two, and then you add in X zero, and that gets you back the original. Another question: why don't we apply a floor or ceiling? 09:48:58 I will assume N is a power of two to simplify things. In general we would have to be careful and use a floor or ceiling. Let me write this down. We are splitting X into the first N over two digits, which we are calling X one, and the last N over two digits, which we are calling X zero. That's what we are doing here. Let's do the same thing for the integer Y. Remember, what we are trying to compute here is X times Y for some integers X and Y. 09:49:50 Likewise, we can write Y as a first half of the digits, Y one, times ten to the N over two, plus the second half of the digits, Y zero.
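(In code, this split is a single divmod; a tiny sketch in Python with the board's example number, names mine:)

# Split an at-most-n-digit integer x into halves so that
# x == x1 * 10**(n // 2) + x0.
n = 4
x = 4216
x1, x0 = divmod(x, 10 ** (n // 2))
assert (x1, x0) == (42, 16)
assert x == x1 * 10 ** (n // 2) + x0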
Now that we have done the splitting, let's see how we can actually use it to compute the product. Remember, what we are trying to find is the product of X and Y. Let's write that in terms of this expansion. Then what we have is: X Y, the product, if we plug in these expressions, we get 09:50:31 the quantity X one times ten to the N over two plus X zero, times the quantity Y one times ten to the N over two plus Y zero. That's just plugging in these expressions. If we expand this out, just multiplying through, we get four terms. First we have X one Y one times ten to the N over two times ten to the N over two, which is ten to the N. So we have X one Y one times ten to the N, plus, now we have the cross terms, X one Y zero 09:51:23 and X zero Y one, times ten to the N over two. Finally, the last term is X zero times Y zero. All I did here was expand out this product: we have two terms here and two terms here, so we get four combinations, and I have written them out. One of them has ten to the N, two of them have ten to the N over two, and the last one is just X zero Y zero. Notice that this expression has four multiplications in it. 09:52:13 Here we have to compute X one times Y one; here we do X one times Y zero, et cetera. These four multiplications are all of N-over-two-digit numbers. Remember, the point of having X one and X zero and Y one and Y zero is that they are half the size of X and Y. X and Y had N digits; X one, X zero, Y one, and Y zero have half as many digits, the first and last N over two digits. These are multiplications of N over two bit, 09:53:03 not bits, digit numbers. So we have actually successfully reduced the original problem of computing this product, the multiplication of two N-digit numbers, into subproblems that are smaller: multiplications of N-over-two-digit numbers. We could do these four multiplications recursively. This would give us a divide and conquer algorithm. We would take the original numbers and split them up into the smaller parts, 09:53:39 X one, X zero, Y one, Y zero, and find these products of N-over-two-digit numbers recursively. Then we have to do a little nonrecursive work. How do we do the recombination? We have to evaluate this expression. What else do we need to do after we compute these products? We need to do the multiplication by ten to the N and ten to the N over two. That's easy: it means adding N zeros or N over two zeros at the end. 09:54:23 All we have to do to multiply by ten to the N is shift the number over by N digits. The multiplications by ten to the N or ten to the N over two are just appending N or N over two zeros. Does everyone see that? If you want to multiply by a thousand, you don't have to use the grade-school algorithm for that; you can add three zeros to the end of whatever number you have. So we can do these multiplications easily; that's linear time. 09:55:10 We append N more zeros or N over two more zeros. We also have to do addition; we have three additions here. Addition we can also do in linear time. The elementary school algorithm for addition is linear: you go through the digits and add them one by one, keeping track of carries. So the additions can also be done in linear time. The nonrecursive work here, assuming we have already computed these four products 09:56:05 recursively, namely multiplying by ten to the N and ten to the N over two and doing these three additions, can be done in linear time.
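(Assembled into code, this first attempt looks something like the following sketch. The name multiply_v1 and the base case are illustrative; Python's built-in * is already arbitrary-precision, so this only shows the structure of the recursion.)

def multiply_v1(x, y, n):
    # Multiply non-negative integers x and y, each with at most n digits.
    # Assumes n is a power of two, as in lecture.
    if n == 1:
        return x * y                    # single digits: multiply directly
    half = n // 2
    x1, x0 = divmod(x, 10 ** half)      # x == x1 * 10**half + x0
    y1, y0 = divmod(y, 10 ** half)      # y == y1 * 10**half + y0
    # Four recursive multiplications of (n/2)-digit numbers:
    a = multiply_v1(x1, y1, half)
    b = multiply_v1(x1, y0, half)
    c = multiply_v1(x0, y1, half)
    d = multiply_v1(x0, y0, half)
    # Recombination: shifts (appending zeros) and additions, all linear time.
    return a * 10 ** n + (b + c) * 10 ** half + d

A quick check: multiply_v1(4216, 1234, 4) returns 5202544, which equals 4216 * 1234.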
Let's get a recurrence for this algorithm to see the overall amount of time we need. There are four recursive calls: these four products we have to compute recursively. As there are four recursive calls, the time needed for N-digit X and Y is four times the run time for problems with half as many digits, 09:56:55 that's to compute these four products, plus a linear amount of work to do the shifting by N or N over two and these three additions. So we have a theta of N nonrecursive term: T of N equals four T of N over two plus theta of N. If you solve this by plugging it into the master theorem, it turns out we are in the case where the recursive part dominates, and the exponent is log base two of four, which is two. You get theta of N squared. 09:57:54 Now we have a bit of an issue. We did all this work dividing into subproblems and recombining, and it's not any faster than the original grade-school algorithm. A question: how did we get N over two? We have to think about, when we do the recursive calls to compute these four products, how big the inputs to those recursive calls are. X zero, X one, Y zero, and Y one all have N over two digits; we took X and split it into two halves. 09:58:36 These recursive calls are each multiplying two N-over-two-digit numbers. So the time to multiply is four times the time to multiply two N-over-two-digit numbers, plus the extra work for the additions and the shifts by ten to the N over two, et cetera. Make sense? Our first attempt at coming up with a divide and conquer algorithm worked in the sense that it's a correct algorithm; it gives the right answer. Unfortunately, when we compute the run time, it's not asymptotically faster than the original. 09:59:20 Fortunately, there is a trick we can use here. This is a clever trick that was discovered in the sixties, and it was surprising at the time, because people thought the grade-school N squared algorithm might be optimal. This faster method I will show you really shocked people. We are almost there with the algorithm we have so far; we just have to make one little improvement. Here's the idea. We are going to start in the same way as before: 10:00:01 we will split things just as we have done here. Then we are going to make a little observation. The trick is: let's consider the following product. Let's define a new quantity, I will call it Z, which is X one plus X zero, times Y one plus Y zero. This may look completely unmotivated. Why do we care about this? The reason it's going to be useful is: let's expand it out and see what we get. We have four terms: 10:00:39 X one Y one, the cross terms X one Y zero and X zero Y one, and the last term X zero Y zero. The reason it makes sense to look at this is that if we look back at the four products we need, those are exactly the products occurring in this expression. The four combinations up here all appear in this product. The thing to notice is that we don't actually need the two cross terms separately; 10:01:23 we only need their sum; we add them up here. In other words, what we really need are only these three quantities: this one, this one in parentheses, and this one. The observation is that since we only need those three quantities, we can get all of them using only three multiplications. If you compute this one using one multiplication, this one using another multiplication, and Z using a third multiplication, then you can get the sum by taking Z and subtracting 10:02:36 the other two terms. If we do this, then the term in parentheses, which we need here, (Writing on board), is just Z minus X one Y one minus X zero Y zero. That's just rearranging the equation.
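(In symbols, the identity being used on the board is:

    Z = (X1 + X0) * (Y1 + Y0) = X1*Y1 + X1*Y0 + X0*Y1 + X0*Y0

so the middle term of the expansion comes for free from the other two products:

    X1*Y0 + X0*Y1 = Z - X1*Y1 - X0*Y0

and therefore

    X*Y = X1*Y1 * 10^N + (Z - X1*Y1 - X0*Y0) * 10^(N/2) + X0*Y0.)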
We can compute this using only three multiplications, namely one to compute Z and one for each of these two terms. With this one, this one, and the product that gives Z, we can get all three quantities we need for the algorithm. 10:03:17 Why do we need to compute Z? The idea is that we can compute Z by doing one multiplication, of this term with that term, and we can compute each of these two with one further multiplication each. Then the observation is that getting this middle term would normally take two more multiplications, X one Y zero and X zero Y one, but if we have computed Z, we can get the sum by just doing these two subtractions. You need no further multiplications. 10:03:56 A question: doesn't computing Z require us to figure out what the central term is? No. We compute it according to this expression here: one multiplication, this times that. That takes only one multiplication. We will not compute it using the second line; we compute it using this. Let me make this explicit. What are the three multiplications? One: X one times Y one. Two: X zero times Y zero. 10:04:37 Three: X one plus X zero, times Y one plus Y zero, which is Z. The claim is that once we have computed these three multiplications, we have all of the terms we need for the formula we had before. We have this term and this term, which appeared explicitly, and the last term that appeared was this sum of two terms, which we can get by this: we take these three results and plug them in; we take Z minus this one and that one. 10:05:30 Here's the first question for you to think about before I write down, well, why don't I write down the pseudocode. Someone is wondering: how do we get Z without the original math? The original math we were going to do, if we go back to the formula, here's the expression we had: we need, in this expression, all four of these terms, which takes four multiplications. What I'm arguing is: let's still do these two multiplications, 10:06:15 those will be these two here, but let's not do these other two multiplications separately. Instead let's only do this one multiplication here (READING SCREEN). The term in parentheses we can compute with this formula, by taking Z and subtracting these two. So we only need to do three multiplications. These three, together with some further additions and subtractions, but those are all linear. We only do three recursive calls; the rest is linear. 10:07:20 Further questions on how this works? What we are trying to find is X Y. We are trying to compute this expression, X Y. We have written out here what X Y is in terms of X one Y one, X zero Y zero, and the cross terms. This trick shows a way to compute this line here, the original product X Y, using only three multiplications. Let me write this as pseudocode; maybe that will make it clearer. This is the idea. (Writing on board). 10:08:10 This is like what we said, but now using this trick. Let me write it on another piece of paper, and we will see whether this gives us a speedup. What is it going to look like? We have this algorithm. (Writing on board). It's a classic divide and conquer algorithm following what we have seen. The base case: if X and Y are small enough to fit in registers, then we can compute the product directly. It depends on the bit width of the machine; 10:08:57 it could be X and Y are less than two to the thirty-two if you have a thirty-two-bit machine. If they fit in registers, we can compute X times Y directly: we just return X times Y.
That's a single operation. If they don't fit in registers, then we will split X and Y as we described before: split X into X one and X zero such that X is X one times ten to the N over two plus X zero. Likewise, Y into Y one and Y zero such that Y equals Y one, 10:09:47 oops, that's Y, times ten to the N over two, plus Y zero. Same splitting as before. Now we will compute this value Z. Z was this product; we compute it recursively by multiplying X one plus X zero times Y one plus Y zero. That was the definition of Z up here. We will use this algorithm recursively to multiply these two integers together. Now we need the other two products, which we also do recursively. X one Y one we compute 10:10:45 with a recursive call, just multiplying X one with Y one. The other product we need is X zero Y zero; we compute that with another recursive call. Now we have the three products we need, and we return the result. Using the equation we had before, what is the result that we want? We need to return the first bit, which is X one times Y one, which we computed, times ten to the N. Now the second term: 10:11:44 plus, now we use the trick, we compute the sum of the two cross terms as Z minus X one Y one minus X zero Y zero, all of which we computed above, times ten to the N over two. Finally, plus the last term, X zero Y zero. Okay. By the algebra we did before, saying that the term in parentheses is equal to this expression, what we get out here, this whole expression, will be equal to X times Y, which is what we wanted. Before we analyze this algorithm, 10:12:27 questions about what the pseudocode is doing?
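(For reference, the board pseudocode corresponds to something like the following Python sketch. This three-multiplication scheme is the one usually credited to Karatsuba. The function name, the digit-counting via str, and the single-digit base case standing in for the fits-in-a-register test are illustrative choices; Python's own * is already arbitrary-precision, so this only demonstrates the recursion.)

def fast_multiply(x, y):
    # Multiply non-negative integers using three recursive multiplications.
    if x < 10 or y < 10:
        return x * y                     # base case: "fits in a register"
    n = max(len(str(x)), len(str(y)))    # number of digits
    half = n // 2
    x1, x0 = divmod(x, 10 ** half)       # x == x1 * 10**half + x0
    y1, y0 = divmod(y, 10 ** half)       # y == y1 * 10**half + y0
    z = fast_multiply(x1 + x0, y1 + y0)  # multiplication 1: Z
    p = fast_multiply(x1, y1)            # multiplication 2: X1 * Y1
    q = fast_multiply(x0, y0)            # multiplication 3: X0 * Y0
    # Cross terms via the trick: X1*Y0 + X0*Y1 == Z - P - Q.
    # With n even, 10**(2 * half) is the 10**N from the board formula.
    return p * 10 ** (2 * half) + (z - p - q) * 10 ** half + q

A quick check: fast_multiply(4216, 1234) returns 5202544, equal to 4216 * 1234.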
Here's the algorithm. The claim I made for why this is useful is that now we are only doing three multiplications instead of four. Why does that matter? I want you to think about this for a second. It may seem like reducing four multiplications to three is only going to improve the run time by a constant factor. We saw the previous algorithm, which used four multiplications, 10:13:38 took N squared time. If we reduce the four multiplications to three, how will this give us an asymptotically better run time? Think about this for a second: what is the effect of reducing the number of recursive multiplications from four to three? How can that give an asymptotic speedup rather than a constant-factor speedup? Think about that and I will call on someone for a suggestion. Anyone want to volunteer an explanation? 10:14:31 Nobody wants to volunteer to say something in front of everyone. I'm getting some good ideas here in the chat. The basic idea is that it's not a constant-factor speedup because you are getting this saving at every recursive level. The top node, instead of splitting into four recursive calls, splits into only three; each of those splits into only three recursive calls, et cetera. It's not that we are just getting an improvement at the top: 10:15:27 at every recursive level we are doing less work, so the number of nodes in the tree is smaller, and not just by a constant factor. We can see this if we work out the recurrence and use the master theorem. Since there are only three recursive calls, to answer my own question, it's because we get the saving at every recursive call, and the savings compound. The recurrence is now: the time needed for an N-digit input is three times the time for subproblems of size 10:16:18 N over two. These have N over two digits, and the sums have N over two digits as well, so we have three recursive calls of size N over two. Now, how much nonrecursive work are we doing? Like before, it's a linear amount: some additions and shifting by N or N over two zeros, but all linear. So the recurrence is T of N equals three T of N over two plus theta of N. By the master theorem, 10:17:06 the solution to this is theta of N to the log base two of three. If you look numerically at what that exponent is, it's approximately one point five eight, so this is on the order of N to the one point five eight. This is faster than the elementary school algorithm, and it's because the branching factor in the recursion tree changed from four to three. You don't just get a constant-factor speedup: 10:18:06 the speedup accumulates at every level, and it changes the asymptotic growth rate of the run time. Here we went from N squared to N to the one point five eight. If N is large this can be a major difference. To summarize, this is now finally faster than the naive algorithm. In fact, this is used in practice by arbitrary precision arithmetic libraries. A question: is the master theorem a good point of reference when trying to speed up a computation? 10:18:44 It's helpful when trying to solve a recurrence. If you are trying to speed up a divide and conquer algorithm, it's good to know whether there's a way to reduce the number of recursive calls; fewer recursive calls is better. But if doing that causes the nonrecursive term to take longer, it may not be worth it. If we were able to get the number of calls from three down to two but the nonrecursive work became an N squared term, the solution would be N squared, which is net slower. Still, trying to think of some way to reduce the number of subproblems 10:19:21 is a good approach. A good question coming in here: is this speedup actually worth it, and is this the fastest known way to do multiplication? It's good to keep in mind that the asymptotic notation is obscuring lower-order terms and constant factors. Here we are doing extra additions and subtractions we didn't have to do in the naive algorithm, so it's possible that for small N this is slower than the naive algorithm. 10:20:07 That stops being the case once the numbers have at least a few hundred digits. That's why this algorithm is used in practice: it does become faster for large numbers, once you start to have a few hundred digits. Is this the fastest known algorithm for integer multiplication? No. We don't have time to go into it; I will mention it here, and I can post resources on Piazza. These libraries I'm mentioning, arbitrary precision libraries, switch to an asymptotically 10:21:03 faster algorithm. (Writing on board). There's a more sophisticated algorithm based on the fast Fourier transform, which is asymptotically faster; it runs in roughly N log N time, which you can see is better. The point at which it starts to dominate in practice is for very large numbers, like thousands of digits. If you are working with that level of precision, it does make sense to switch to this other algorithm, and that's what people do. 10:21:41 The fast Fourier transform is a very important algorithm used in all kinds of applications: compression, signal processing, astronomy if you are interested in that. This is an important other divide and conquer algorithm that we may come back to later in the class if we have time. Otherwise it's worth reading about, because it has a huge number of applications in many different fields.
For this you can see, in the Algorithm Design book, 10:22:26 section five point six has a good description, or CLRS has a chapter on it; I think it's chapter thirty. We may come back to this. In fact, the question was: is this the fastest known way of doing integer multiplication? Actually, no. There's a very complicated algorithm from about fifteen years ago that improves on this. And then there was a breakthrough just last year, in twenty nineteen: people finally found an N log N algorithm. 10:23:27 This algorithm is thought to be optimal; it's thought you can't do multiplication in linear time, but no one has proved that. The algorithm is complicated, and the point at which it becomes faster than the algorithms here is at very large N, so it's not useful in practice for now; it's a theoretical algorithm, due to the large hidden constant. This is another example of how an asymptotic growth rate doesn't tell you everything about practice. It could be N log N, 10:24:06 but if the constant factor is so huge, it doesn't make sense to use the algorithm in practice. I will not talk about it further; I will put links on Piazza. One last question here, and then, since we will not have time to get to the other problem I wanted to talk about today, I will cover that on Friday. There was a question about why we have N over two in the recurrence. If we look at each of these three recursive calls, what's the size of the input given to it? 10:24:57 These are N-over-two-digit numbers. If X and Y had N digits, then X one, Y one, X zero, and Y zero have N over two digits. We run each of these calls on N-over-two-digit numbers. The sums have N over two digits, plus maybe one more digit, but asymptotically that doesn't change anything. That's why the subproblems have size N over two here. Other questions? Does this algorithm have a name? Not really; it's breaking news. This was a big research discovery in twenty nineteen. 10:25:32 I don't know if it has a name other than the names of the authors. I will put a reference to it online if you would like to look it up. As I said, for now it's not practical. The other algorithm, the Fourier transform one, is practical; it does get used. I encourage you to read more about the Fourier transform in either of those sections. It's another example of a divide and conquer algorithm, similar to this multiplication in that it does a bit of tricky algebra to reduce the number of recursive calls 10:26:00 that have to be made. It has all kinds of applications in many different fields: signal processing, compression, astronomy. I can talk about this after class. That's all we have time for today. Just a reminder: homework one is now due tomorrow night. Please do turn that in. I will see you all on Friday.