Princeton University Lectures: Algorithms Part I – Analysis of Algorithms(4) – Order-of-Growth Classifications

by Kevin Wayne, Robert Sedgewick

Now, fortunately, when we analyze algorithms, not too many different functions arise, and that property allows us to classify algorithms according to their performance as the problem size grows.

So that's what we'll talk about next. The good news is that only a few functions turn up for the algorithms that we're interested in. We can craft things that have other functions, and there are counterexamples to this, but a great number of the algorithms that we consider are described by the few functions plotted here. When we talk about the order of growth, we're not talking about the leading constant. Normally we'll say the running time of the algorithm is proportional to N log N; that means our hypothesis is that the running time is tilde c N lg N, where c is some constant. And these are log-log plots, which give a good idea of what's going on. If the order of growth is constant or logarithmic, it doesn't matter how big the problem is: if the running time is T for, say, a thousand, then for half a million it will still be pretty close to T.

If it's linear, that is, if its order of growth is proportional to N, then as the problem size increases, the running time increases correspondingly. And the same is true, almost, if it's N log N. So those are the algorithms that we strive for: they scale with the input size. As the input grows, so grows the running time, and that's a reasonable situation to be in, as we discussed when we talked about union-find. If it's quadratic, the running time grows much faster than the input size, and it's not feasible to use such an algorithm for large inputs. And cubic is even worse.

So what we find is that for many algorithms, our first task is simply to make sure the running time is not quadratic or cubic. And these order-of-growth classifications come from simple patterns in the code that we write. If our code has no loops in it, then the order of growth is going to be constant. If our code has some kind of loop where the input is divided in half, as in the binary search algorithm, then the order of growth will be logarithmic; we'll take a look at that analysis shortly. And if you do the doubling test on a logarithmic algorithm, the running time behaves almost as if it were constant: with a huge input, doubling the size barely changes the running time, because you'll hardly notice that lg N. If you have a loop where you touch everything in your input, then the running time is linear, proportional to N; typical examples are finding the maximum, or counting the number of zeros, our 1-sum problem. A very interesting category is the so-called N lg N, or linearithmic, algorithms. Those are the ones that arise from a particular algorithm design technique called divide-and-conquer, and the mergesort algorithm, which we'll talk about in a couple of weeks, is a prime example of that. Then if you have a doubly nested for loop, like our 2-sum algorithm, that's going to take time proportional to N^2, which, as we saw, is quadratic; and a triply nested for loop, like our 3-sum algorithm, is going to be cubic, time proportional to N^3.
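
For concreteness, here is a minimal Java sketch of two of these patterns (the class and method names are just illustrative):

    public class GrowthPatterns {
        // logarithmic: a loop that cuts the problem size in half each time,
        // as binary search does (~ lg N iterations)
        public static int halvings(int n) {
            int count = 0;
            while (n > 1) { n = n / 2; count++; }
            return count;
        }

        // linear: a loop that touches every entry once, e.g. find the maximum
        public static int max(int[] a) {
            int max = a[0];
            for (int i = 1; i < a.length; i++)   // ~ N iterations
                if (a[i] > max) max = a[i];
            return max;
        }
    }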

For a quadratic algorithm, the doubling factor is four: as the input size doubles, the running time goes up by a factor of four. For a cubic algorithm, the running time goes up by a factor of eight. And that's the kind of calculation you can do in your head while waiting for a program to finish.

There's also a category of algorithms whose running time is exponential. For those algorithms, N can't get very large at all, and we'll talk about them near the end of Part II of the course.

So these are some practical implications of the order of growth. We won't dwell on this too much, except to come back to the point that the algorithms we're really interested in, the ones that can solve huge problems, are the linear and N lg N algorithms. Because even now, a quadratic algorithm on a typical fast computer can only solve problems with sizes in the tens of thousands, and a cubic algorithm only sizes in the thousands. And nowadays those are just not useful, because the amount of data that we have is more like millions or billions or trillions. That fact is becoming more and more evident as time wears on. In ancient times there would be some discussion about whether a quadratic algorithm might be useful, but the situation only gets worse as time goes on, so we need better algorithms.

To illustrate the process of developing a mathematical model for describing the performance of an algorithm, we'll look at a familiar algorithm called binary search. The goal is: you have a sorted array of integers, and you're given a key. You want to know, is that key in the array? And if it is, what's its index? A fast algorithm for doing this is known as binary search, where we compare the key against the middle entry. In this case, if we're looking for 33, we compare it against 53. If it's smaller, we know it's in the left half of the array; if it's larger, we know it's in the right half of the array; if it's equal, we've found it. And then we apply the same algorithm recursively.

So let's quickly look at a demo. We're looking for 33 in this array. We compare it against the middle entry in the array, 53; it's less, so we go left, and now we can concentrate just on the left half of the array. Now we look in the middle of this half, that's 25; 33 is bigger, so we go right.

And now we concentrate on the right half of the left half, and we have a smaller subarray. We look at the middle; 33 is less, so we go left. Now we have only one element to look at, and we've found our key 33 in the array, so we return its index, four.

If we're looking for something that's not in the array, we do the same process. Say we're looking for 34. It's going to be the same: look in the left half, look in the right half, look to the left of the 43. Now there's only one key to look at, and it's not 34, so we say it's not there.

So that's binary search. Here's the code for binary search.

Actually, although binary search is a simple algorithm, it's notoriously tricky to get every detail right. In fact, one paper claimed that the first bug-free binary search wasn't published until 1962, and even in 2006 a bug was found in Java's implementation of binary search. That's just an indication of the care we have to take in developing algorithms, especially for libraries that are going to be used by millions of people. So here's an implementation. It's not recursive, although we could implement it recursively. It just reflects in code what I described in words: we want to know whether a key is in an array. We use two pointers, lo and hi, to indicate the part of the array we're interested in. As long as lo is less than or equal to hi, we compute the middle, and then we compare our key against the middle entry. Actually, it's a three-way compare: if the key is equal, we return that mid index; if it's less, we reset the hi pointer; if it's greater, we reset the lo pointer. And we keep going until the pointers cross; if they cross and we haven't found the key, we return -1.
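
Here is a sketch along the lines just described; computing mid as lo + (hi - lo) / 2 avoids the integer-overflow bug that was found in Java's library implementation in 2006:

    public static int binarySearch(int[] a, int key) {
        int lo = 0, hi = a.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;        // overflow-safe middle
            if      (key < a[mid]) hi = mid - 1; // key, if present, is in the left half
            else if (key > a[mid]) lo = mid + 1; // key, if present, is in the right half
            else return mid;                     // found the key
        }
        return -1;                               // pointers crossed: key is not in the array
    }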

And it's easy to persuade ourselves that this program works as advertised by thinking about this invariant: if the key is in the array, then it's between lo and hi in the array.

Alright, so that's a program that you're probably familiar with. Let's look at the mathematical analysis of that program. This is a theorem that we can prove easily; we won't do a lot of proofs in this course, but this one is worth doing. It says that binary search uses at most 1 + lg N compares to complete a search in a sorted array of size N. We set up the problem by defining a quantity T(N), the number of compares that binary search needs for an array of size N. Then we write down a recurrence relation that reflects the code. What the code does is divide the problem size in half at the cost of one compare (depending on how you count; think of the three-way compare as one compare), so T(N) <= T(N/2) + 1, and that's true as long as N is bigger than one.

If N is equal to one, the solution is one: T(1) = 1. So it's a recurrence relation describing the computation, and we can go ahead and solve the recurrence by applying it to the first term on the right; that's called telescoping. If T(N) <= T(N/2) + 1, we can apply the same thing to T(N/2) and throw out another 1, then apply it to T(N/4) and throw out another 1, and so forth, until we get down to T(1), at which point we have lg N ones left over. Now, this is a proof sketch: you might have noticed that it only holds if N is a power of two.

That's because we didn't really specify in the recurrence what we mean when N is odd. But it's possible to go ahead and take care of that detail as well, and show that binary search's running time is logarithmic always.
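
Written out for N a power of two, the telescoping goes:

    \begin{aligned}
    T(N) &\le T(N/2) + 1 \\
         &\le T(N/4) + 1 + 1 \\
         &\;\;\vdots \\
         &\le T(N/N) + \underbrace{1 + \cdots + 1}_{\lg N} \;=\; 1 + \lg N.
    \end{aligned}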

All right, so given that fact, we can develop a faster algorithm for 3-sum. It's a sorting-based algorithm. What we're going to do is take the numbers that we have as input and sort them; we'll talk about sorting algorithms next week. We can do the sort in time proportional to N lg N, but that's not the main part of the computation. The main part of the computation is that, after the numbers are sorted, we go through, and for each pair of numbers a[i] and a[j], we do a binary search for -(a[i] + a[j]). If we find it, then we have three numbers that sum to zero. For example, for the pair -40 and 0, the negated sum is 40; we do a binary search, 40 is in there, and so we have one solution to the 3-sum problem. And we do that for all pairs of numbers.
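
A sketch of the sort-based algorithm, assuming distinct integers (the k > j test is one way to count each triple exactly once):

    import java.util.Arrays;

    public class ThreeSumFast {
        // count triples that sum to exactly 0, assuming distinct integers
        public static int count(int[] a) {
            Arrays.sort(a);                                  // N lg N, not the main cost
            int n = a.length;
            int count = 0;
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++) {
                    int k = Arrays.binarySearch(a, -(a[i] + a[j]));
                    if (k > j) count++;                      // count each triple once
                }
            return count;
        }
    }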

A quick analysis says the order of growth of the running time is going to be N^2 lg N. You don't even need a good sort; you could use elementary insertion sort, the first one we talk about, because the cost is dominated by the binary search that we do for each of the roughly N^2/2 pairs. So we get an N^2 lg N running time. It's a quick example of how we can improve performance by finding an improved algorithm for a problem: N^2 lg N is much less than N^3 for large N.

So we're implicitly making the hypothesis that if we do the sort-based thing and use binary search, we're going to have a faster program. And sure enough, we can go ahead and run some experiments and find that whereas it took us about 50 seconds to solve the problem for 8,000 numbers before, it takes less than a second now; in about 50 seconds we can solve a problem of size 64,000. So typically we expect that a better order of growth means faster in practice, but when it comes to examining algorithms in detail, we can go ahead and do the tests and figure out which algorithm is faster. And certainly, going from N^3 to N^2 lg N, we expect a much better algorithm.

Princeton University Lectures: Algorithms Part I – Analysis of Algorithms(3) – Mathematical Models

by Kevin Wayne, Robert Sedgewick

Observing what's happening, as we did in the last section, gives us a way to predict performance, but it really doesn't help us understand what the algorithm's doing. So next, we're going to look at mathematical models, a way to get a better understanding of what's really happening.

Again, this concept was really developed and popularized by Don Knuth, starting in the late 60s. At that time, computer systems were becoming complicated for the first time, and computer scientists were concerned about whether we really were going to be able to understand what's going on in them. Knuth was very direct in saying that this is something we certainly can do: we can calculate the total running time of a program by identifying all the basic operations, figuring out the cost of each, figuring out the frequency of execution of each, and summing up cost times frequency over all the operations.

You have to analyze the program to determine the set of operations. The cost depends on the machine and the system, which is what we talked about before; the frequency leads us to mathematics, because it depends on the algorithm and the input data. Knuth has written a series of books that give very detailed and exact analyses, within a particular computer model, for a wide range of algorithms. So, from Knuth, we know that in principle we can get accurate mathematical models for the performance of algorithms or programs in operation.

All right, so what does this process look like? Well, you can, if you want, run experiments. In ancient times, we would actually look at the computer manual; every computer came with a manual that said precisely how long each instruction would take. Nowadays it's a little more complicated, so we run experiments: you can go ahead and do a billion adds and figure out that maybe, on your computer, an add takes 2.1 nanoseconds. Or you can time more complicated functions, like computing a sine or an arctangent, although that's already getting close to the analysis of algorithms.

So there's some way to determine the costs of the basic operations, and in most cases we'll just postulate that each one takes some constant amount of time and figure out what the constant is. Although, when we're working with a collection of N objects, there are some operations that take time proportional to N. For example, if you allocate an array of size N, it takes time proportional to N, because in Java the default is that all the elements of the array are initialized to zero.

For other operations, the cost depends on the system implementation, and an important one is string concatenation. If you concatenate two strings, the running time is proportional to the total length of the strings. Many novices programming in Java make the mistake of assuming that's a constant-time operation when it's not.
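
As an illustrative sketch (not from the lecture): building a string by repeated concatenation in a loop is quadratic, because each += copies every character accumulated so far, whereas StringBuilder appends in amortized constant time:

    // quadratic: each += re-copies the whole string built so far
    // String s = "";
    // for (String w : words) s += w;

    // linear: StringBuilder appends in amortized constant time
    public static String join(String[] words) {
        StringBuilder sb = new StringBuilder();
        for (String w : words)
            sb.append(w);
        return sb.toString();
    }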

Alright, so that's the cost of each operation. More interesting is the frequency of execution of the operations. Here's a very simple variant of the 3-sum problem, the 1-sum problem: how many numbers are actually equal to zero? For that one, it's just one for loop: we go through, test each number against zero, and increment a count. By analyzing that code, you can see that i and count have to be declared and then assigned to zero; there are compares of i against N, and there are N + 1 of them; there are compares of a[i] against zero, N of those, and N array accesses. The number of increments is variable: i is incremented N times, but count could be incremented anywhere from zero to N times. So that frequency depends on the input data, and we might need a model for describing it; or maybe there are other operations that are more expensive, and then we won't need to worry about it.
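
As a sketch, here is that loop with the frequencies just described attached as comments:

    // 1-sum: how many numbers are equal to zero?
    public static int countZeros(int[] a) {
        int n = a.length;
        int count = 0;                  // declarations and assignments: constant
        for (int i = 0; i < n; i++)     // N + 1 compares of i against n; N increments of i
            if (a[i] == 0)              // N array accesses; N compares against zero
                count++;                // 0 to N increments, depending on the input
        return count;
    }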

So let's look at the next more complicated problem: what about the frequency of execution of instructions in the program for the 2-sum problem, how many pairs of integers sum to zero? In this case, you have to do a little bit of math to see that when i goes from 0 to N-1 and j goes from i+1 to N-1, the if statement is executed N-1 times on the first pass through the outer loop, N-2 times on the next, and so forth. That's the sum of the integers from 0 up to N-1, which is the simple discrete sum 1/2 N(N-1); and since each execution of the if statement involves two array accesses, for a[i] and a[j], the number of array accesses is N(N-1). So we can go ahead and get these exact counts, but already it's getting a little bit tedious to do.
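
In symbols, the number of times the if statement executes is

    \sum_{i=0}^{N-1} \; \sum_{j=i+1}^{N-1} 1 \;=\; (N-1) + (N-2) + \cdots + 1 + 0 \;=\; \frac{N(N-1)}{2},

and at two array accesses per execution, that gives N(N-1) array accesses.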

As far back as Turing, who knew, as well as Babbage did, that we want a measure of the amount of work involved in a computing process, it was recognized that you don't necessarily want to go through and do the count in full detail; it's still helpful to have a crude estimate. You could count up the number of times every operation is applied, give weights to them, count them all up, and so forth. But maybe we should just count the ones that are the most expensive. That's what Turing said in 1947, and realistically, that's what we do nowadays.

So rather than going in and counting every little detail, we take some basic operation that's maybe the most expensive, and/or the one that's executed most often, the one whose cost times frequency is the highest, and use that operation as a proxy for the running time; essentially, we're making the hypothesis that the running time grows like a constant times the count of that operation. In this case, we're going to pick array accesses.

So that's the first simplification. The second simplification is that we're going to ignore low-order terms in the formulas that we derive, and there's an easy way to do that, called the tilde notation. The idea is that when N is large, in a formula like this, the N^3 term is much, much bigger than the N term or the 16; in fact, so much bigger that we would hardly even notice the low-order terms. So all of these formulas are tilde 1/6 N^3, and that's a fine approximation to these quantities. It greatly simplifies the calculations to throw away the low-order terms like this. So we focus on one operation and throw away the low-order terms with the tilde. This is the technical definition of tilde.

f(N) ~ g(N) just means that the limit of f(N)/g(N), as N goes to infinity, equals one; and you can check that that's going to hold in these kinds of situations. So that greatly simplifies the frequency counts.
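
In symbols:

    f(N) \sim g(N) \quad\Longleftrightarrow\quad \lim_{N \to \infty} \frac{f(N)}{g(N)} \;=\; 1.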

And if we're only picking one thing, then for the 2-sum problem we're just talking about tilde N^2 array accesses, and maybe another tilde N^2 for the increments. Okay. So again, when N is large the low-order terms are negligible, and when N is really small they're not negligible, but then we don't really care, because we're trying to estimate running times for large N, and running times for small N are going to be small no matter what.

All right, so now, using both the cost model and the tilde notation, we can simply say that this program uses tilde N^2 array accesses, with the implicit hypothesis that we think the running time is going to be tilde a constant times N^2. Okay, now what about 3-sum? Let's do a real problem. So now we have the triple loop.

Then we have to do a slightly more complicated combinatorial problem; it's not that big a deal, really. We're looking at the number of distinct ways to choose three things out of N, and that's a binomial coefficient. Again, doing the math and using the tilde, N choose 3 is tilde 1/6 N^3; with three array accesses for each triple, we can say the count of array accesses is tilde 1/2 N^3. So we're not computing and summing the costs of all operations; that's too much work. We're picking the most expensive operation, in terms of cost times frequency, approximating its count, and trying to get a good model for the running time.

Now, we're not going to do a full course of discrete mathematics here, but there are some basic things that we'll want to use that are not that difficult to understand. A lot of times we find that we need an estimate of a discrete sum, like the sum 1 + 2 + ... + N that we just did, or the sum of the squares, or other things like the 3-sum triple loop. And actually, if you've had basic calculus, one way to think of it is to just replace the sum with an integral. That usually works, or we can do the math carefully and use the so-called Euler–Maclaurin summation formula to get a true approximation. If you think of it this way, you'll believe us when we say that the first sum is tilde 1/2 N^2; or that 1 + 1/2 + 1/3 + ... + 1/N is like the integral from x = 1 to N of 1/x, which is the natural log of N. Even the 3-sum triple loop, if you're used to multiple integrals, will quickly give you the tilde 1/6 N^3.
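
The three approximations just mentioned, written as integrals:

    \sum_{i=1}^{N} i \;\approx\; \int_{1}^{N} x\,dx \;\sim\; \tfrac{1}{2}N^2,
    \qquad
    \sum_{i=1}^{N} \frac{1}{i} \;\approx\; \int_{1}^{N} \frac{dx}{x} \;=\; \ln N,
    \qquad
    \sum_{i=1}^{N} \sum_{j=i}^{N} \sum_{k=j}^{N} 1 \;\approx\; \int_{1}^{N}\!\!\int_{x}^{N}\!\!\int_{y}^{N} dz\,dy\,dx \;\sim\; \tfrac{1}{6}N^3.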

There are many other techniques that we could use for this. We're not going to teach all of that, but we'll sometimes refer to results of this type.

Alright, so in principle, Knuth tells us that accurate mathematical models are available. In practice, we can get really complicated formulas, and we also might need advanced mathematics that the theoretician revels in, but that people learning algorithms for the first time might not be expected to know.

So, in the end, careful exact models are best left for experts; there's really a lot that can go on. On the other hand, approximate models are definitely worthwhile, and for all the algorithms that we consider, we'll try to communicate a reasonable approximate model that can be used to describe the running time. Sometimes we'll give the mathematical proofs, and other times we'll have to just cite the work of some expert.

Princeton University Lectures: Algorithms Part I – Analysis of Algorithms(2) – Observations

by Kevin Wayne, Robert Sedgewick 

Okay, so the first step is to be able to make some observations about the running time of programs. For the analysis of algorithms, that's easier than in a lot of scientific disciplines, as we'll see.

For a running example, we're going to use the so-called 3-sum problem, and it's an easy problem to state: given N distinct integers, how many triples sum to exactly zero? For example, in the file 8ints.txt, which has eight integers in it, there are four triples that sum to zero: 30 -40 10, 30 -20 -10, and so forth. Our goal is to write a program that can compute this quantity for any input file, any set of N integers. This is actually an extremely important computation that's deeply related to many problems in computational geometry, a branch of computer science that covers the algorithms and underlying science related to graphics, movies, and geometric models of all sorts. So this is actually an important practical problem.

But it's a simple one to write code for; you could write down this program without much effort. It's got a static method count that takes an integer array as an argument, and n, the length of the array, is the number of integers. We start with a variable count equal to zero, and then a triple for loop checks each triple i, j, k: we go i from 0 to n, j from i+1 to n, and k from j+1 to n, so that we get each triple just once. Then, if a[i] + a[j] + a[k] == 0, we increment the count. After the triple for loop, we return the count. The main method in this simple class just reads in all the integers and prints out the count. So that's a brute-force algorithm that is a fine method for solving the 3-sum problem. Now what we're interested in is: how much time does this take, as a function of n?
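
A sketch of that brute-force program (input handling omitted):

    public class ThreeSum {
        // count the triples i < j < k with a[i] + a[j] + a[k] == 0
        public static int count(int[] a) {
            int n = a.length;
            int count = 0;
            for (int i = 0; i < n; i++)
                for (int j = i + 1; j < n; j++)
                    for (int k = j + 1; k < n; k++)
                        if (a[i] + a[j] + a[k] == 0)
                            count++;
            return count;
        }
    }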

Well, one way to time our program is to just look at a watch. If you have a stopwatch, or the clock on your phone, or whatever, you can just go ahead and time it. Or, as part of the standard libraries for this course, there's a Stopwatch class that will compute the elapsed time. So anytime you run a program, if it's set up to easily take inputs of different sizes, a natural thing to do is just to run it for bigger and bigger sizes.
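
A sketch of how such a timing run might look, assuming the booksite's algs4 library for the Stopwatch class and the ThreeSum class sketched above:

    import edu.princeton.cs.algs4.Stopwatch;

    public class TimeThreeSum {
        public static void main(String[] args) {
            int n = 2000;
            int[] a = new int[n];
            for (int i = 0; i < n; i++)
                a[i] = (int) (Math.random() * 2000000) - 1000000;  // random integers

            Stopwatch timer = new Stopwatch();       // starts timing on construction
            int count = ThreeSum.count(a);
            double seconds = timer.elapsedTime();    // elapsed time in seconds
            System.out.println(count + " triples, " + seconds + " seconds");
        }
    }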

So for eight ints, this program takes hardly any time; for 1,000 ints, it takes about half a second; for 2,000, it takes more time, 3.7 seconds, and if we run it again, it still takes 3.7 seconds; then we try 4,000. Each time we're doubling the size of the input, and it's definitely taking more time each time. And actually, as we'll see, programmers who get in the habit of timing their programs in this way get so that they can pretty easily and quickly estimate when a run is going to finish; in fact, while you're waiting for it to finish, you can often figure it out. So that one took about 30 seconds for 4K, and we can definitely figure out how long it's going to take for 8K before it finishes; you'll see how in just a second. I'm not going to wait right now; you can think about what you think the answer will be.

Okay, so that's empirical analysis: run the program for various input sizes and measure the running times. Now, if this were some scientific problem where we were counting something that happens in the natural world, the number of ants in an anthill or whatever, we'd have only a few data points, and we would try to understand what was going on by doing a plot, with the running time we're interested in on the y-axis and the problem size on the x-axis.

You get a curve like this, and what scientists usually do, because so many problems fall into this class, is do the plot as a log-log plot. If you do it as a log-log plot, very often you'll get a straight line, and the slope of the straight line is the key to what's going on. In this case, the slope of the straight line is three, and you can run what's called a regression to fit a straight line through the data points. Then it's not difficult to do the math to show that if you get a straight line and its slope is b, then your function is proportional to a N^b. That's called the power law, and it's true of many, many scientific problems, including most algorithms. Here's a little bit of the math: the straight line means, since we did the log-log plot with powers of two, that lg(T(N)) = b lg N + c. We have empirical values of b and c, and if you raise two to the power of each side of that equation, you get T(N) equal to a constant times N^b.
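
That is:

    \lg T(N) = b \lg N + c
    \quad\Longrightarrow\quad
    T(N) = 2^{\,b \lg N + c} = 2^{c} N^{b} = a N^{b}, \qquad \text{with } a = 2^{c}.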

So right away, just from observation, we have a pretty good model for the running time of our program: we can do the math and figure out that it seems as though the running time is about 10^-10 N^3 seconds.

We can use that hypothesis to go ahead and make predictions: just plug in different values of N, and it says it will take us 400 seconds for 16,000. 400 seconds is not too long to wait, so we can go ahead and invest in running that experiment, and sure enough, we're pretty close: 408 seconds when we run it. And now we can make a prediction for 32,000, or for whatever else we might be interested in. The model lets us make predictions without investing the expense of running the experiments.

In fact, in this situation, if there is a power law (and, again, the running times of the very great majority of computer algorithms do satisfy a power law), what we can do is just double the size of the input each time, the way we were doing, and take the ratio of the running times for 2N and N. If you do that, the ratio converges to a constant, and the log of the ratio converges to the exponent of N in the running time. You just need a little math to check that, but it's a very easy and natural way to go ahead and predict running times.
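
The little math in question, for a running time obeying the power law T(N) = a N^b:

    \frac{T(2N)}{T(N)} \;=\; \frac{a(2N)^b}{aN^b} \;=\; 2^b,
    \qquad\text{so}\qquad
    \lg \frac{T(2N)}{T(N)} \;\to\; b.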

So, as I said before, we have this quick way to estimate b in the power-law relationship. How do we estimate a? Well, we can just run the program for some big N and solve for a. Once we've decided that the exponent is three, doing that gives a model pretty close to the one we had from plotting things, an almost identical hypothesis, and we got it just by running the program, doubling N each time.

Okay, so there are a lot of effects involved in trying to understand the running time of a program on your machine. The key effects are independent of what computer it is: the algorithm you're using and the input data. Those really determine the exponent in the power law. Then there are a lot of system-dependent effects: what kind of hardware do you have, a fast computer or a slow one? What kind of software? What else is going on in your computer? All of those things really determine the constant a in the power law. In modern systems, there's so much going on in the hardware and software that it's sometimes difficult to get really precise measurements. But, on the other hand, we don't have to sacrifice animals or fly to another planet the way they do in other sciences; we can just run a huge number of experiments, and that usually takes care of understanding these kinds of effects.

Princeton University Lectures: Algorithms Part I – Analysis of Algorithms(1) – Introduction

by Kevin Wayne, Robert Sedgewick 

Welcome back. Today we're going to do some math and some science; not a lot, but we need to have a scientific basis for understanding the performance of our algorithms so we can properly deploy them in practice. So today we're going to talk about how to observe performance characteristics of algorithms. We're going to look at how to make mathematical models, and how to classify algorithms according to the order of growth of their running time. We'll talk a bit about the theory of algorithms, and also about how to analyze memory usage. To put this all in perspective, we're going to think about these issues from the points of view of several different types of characters.

So the first one is the programmer, who needs to solve a problem and get it working and get it deployed. The second one is the client, who wants to use whatever the programmer did to get the job done. The third one is the theoretician; that's somebody who really wants to understand what's going on. And the last one is kind of a team: the basic blocking and tackling that's sometimes necessary to get all these things done. There's a little bit of each one of these in today's lecture.

And actually when you’re a student you have to think that you might be playing any or all of these roles some day.

So it's pretty important to understand the different points of view. The key thing that we'll focus on is running time. And actually, the idea of understanding the running time of a computation goes way back, even to Babbage and probably before. Here's a quote from Babbage: "As soon as an analytical engine exists, it will necessarily guide the future course of the science. Whenever any result is sought by its aid, the question will then arise: by what course of calculation can these results be arrived at by the machine in the shortest time?" If you look at Babbage's machine, the Analytical Engine, it's got a crank on it, and literally the concern that Babbage had in knowing how long a computation would take was: how many times do we have to turn the crank? It's not that different in today's world. The crank may be something electronic that's happening a billion times a second, but still, we're looking for how many times some discrete operation has to be performed in order to get a computation done.

So there are a lot of reasons to analyze algorithms. In the context of this course, we're mainly interested in performance prediction. We also want to compare the performance of different algorithms for the same task, and to be able to provide some guarantees on how well they perform. Along with this goes understanding some theoretical basis for how algorithms perform. But primarily, the practical reason we want to analyze algorithms and understand them is to avoid performance bugs. We want to have some confidence that our algorithms will complete the job in the amount of time that we think they will. And it's very, very frequent to see, in today's computational infrastructure, a situation where the client gets bad performance because the programmer did not understand the performance characteristics of the algorithm. Today's lecture is about trying to avoid that.

Now, we're going to focus on performance and on comparing algorithms in this course. There are later courses in typical computer science curricula that have more information about the theoretical basis of algorithms, and I'll mention a little bit about that later on. But our focus is on being able to predict performance and to compare algorithms.

Now, there's a long list of success stories in designing algorithms with better performance, enabling the solution of problems that would otherwise not be solved, and I'll just give a couple of examples. One of the first and most famous is the so-called FFT algorithm. That's an algorithm for breaking down a waveform of N samples of a signal into periodic components, and it's at the basis of DVDs and JPEGs and many other applications. There's an easy way to do it that takes time proportional to N^2, but the FFT algorithm takes only N log N steps. And the difference between N log N and N^2 is the difference between being able to solve a large problem and not being able to solve it. A lot of the digital media technology that we have today is enabled by that fast algorithm.

Another example was actually developed by Andrew Appel, who's now the chair of computer science here at Princeton. It was developed when he was an undergraduate, for his senior thesis. It's a fast algorithm for the N-body simulation problem: the easy algorithm takes time proportional to N^2, but Appel's algorithm was an N log N algorithm that, again, meant that scientists could do N-body simulations for huge values of N. And that enables new research.

So, the challenge that we usually face is: will my program be able to solve a large practical input? And actually, the working programmer is faced with that all the time: why is my program running so slowly? Why does it run out of memory? That has faced programmers for a really long time, and the insight for addressing it, due to Knuth in the 1970s, is that we really can use the scientific method to understand the performance of algorithms in operation.

Maybe we're not unlocking new secrets of the universe, but we can use the scientific method, treat the computer as something to be studied in that way, and come to an understanding of how our programs are going to perform. Let's take a look at that in more detail.

So this is just a quick summary of what we mean by the scientific method, which has been successful for a couple of centuries now. What we're going to do is observe some feature of the natural world; in this case, it's going to be the running time of our program on a computer. Then we're going to develop a hypothesis, some model that's consistent with the observations, and we're going to hope that the hypothesis is good enough that it will allow us to predict something.

Usually, we predict a running time for a larger problem size, or on a different computer. Then we verify the predictions by making more observations, and validate until we're comfortable that our model, our hypothesis, and our observations all agree. That's a way to get comfortable that we understand the performance of our programs. Now, within the scientific method there are some basic principles. The first is that if you run experiments, you should expect that somebody else can run the same experiments and get the same result. Also, hypotheses have to have the specific property that an experiment can show the hypothesis to be wrong; so a hypothesis has to be carefully crafted, and we'll be sure to try to do that. And again, the feature of the natural world that we're studying involves some particular computer that exists in the natural world; that changes the algorithm from an abstraction into some kind of actual physical thing happening, like electrons racing around inside the computer.

Princeton University Lectures: Algorithms Part I – Union-Find(5) Union-Find Applications

by Kevin Wayne, Robert Sedgewick 

Alright.

Now that we've seen efficient implementations of algorithms that can solve the union-find problem for huge problem instances, let's look to see how that might be applied.

There's a huge number of applications of union-find. We talked about dynamic connectivity in networks; there are many other examples in our computational infrastructure. One important one, down at the bottom of the list, is in image processing, for labeling areas in images. We'll see later Kruskal's minimum spanning tree algorithm, a graph-processing algorithm that uses union-find as a subroutine. There are algorithms in physics for understanding physical phenomena, one of which we'll look at as an example, and many others on this list.

So, the one we're going to talk about now is called percolation. That's a model for many physical systems; I'll give an abstract model and then just talk briefly about how it applies to physical systems. Let's think of an N-by-N grid of squares that we call sites. Each site is open (white in the diagram) with probability p, or blocked (black in the diagram) with probability 1 - p, and we say that a system percolates if the top and the bottom are connected by open sites. In the system at the left, you can find a way to get from the top to the bottom through white squares, but the system at the right does not percolate: there's no way to get from the top to the bottom through white squares.

So that's a model for many systems. You can think of electricity: a vacant site is a conductor and a blocked site is insulated, and if there's a path of conductors from top to bottom, then the system conducts electricity. Or you could think of water flowing through a porous substance of some kind, where a vacant site is empty and a blocked site is filled with material, and either the water flows through from top to bottom or it doesn't. Or you could think of a social network, where people are connected or not, and the question is whether there's a way to get from one group of people to another, communicating through that social network. Those are just a few examples of the percolation model.

So we're talking about a randomized model, where the sites are vacant with a given probability. It's pretty clear that if the probability that a site is vacant is low, as in the two examples on the left in this diagram, the system is not going to percolate: there aren't enough open sites for there to be a connection from the top to the bottom. If the probability is high and there are a lot of open sites, it definitely is going to percolate; there will be lots of ways to get from the top to the bottom. But in the middle, when the probability is medium, it's questionable whether the system percolates or not. So the scientific question, or the mathematical question, coming from this model is: how do we know whether a system is going to percolate or not?

In this problem, and in many similar problems, there's what's called a phase transition: when the probability is low, the system doesn't percolate; when it's high, it does; and the threshold between when it percolates and when it doesn't is very sharp. In fact, as N gets large, there is a single value such that if the probability is less than that value, the system almost certainly does not percolate, and if it's greater, it almost certainly does. The question is: what is that value? This is an example of a mathematical model where the problem is very well articulated (what's that threshold value?), but nobody knows the solution to the mathematical problem. The only solution we have comes from a computational model, where we run simulations to try to determine the value of that probability. And those simulations are only enabled by fast union-find algorithms; that's our motivating example for why we might need fast union-find algorithms. So let's look at that.


So what we're going to run is a so-called Monte Carlo simulation, where we initialize the whole grid to be blocked, all black, and then randomly fill in open sites. And we keep going: every time we add an open site, we check to see if it makes the system percolate, and we continue until we get to a point where the system percolates. We can show that the vacancy percentage at the moment it percolates is an estimate of the threshold value. So what we want to do is run this experiment millions of times, which we can do on a computer, as long as we can efficiently do the calculation of whether the system percolates or not. That's a Monte Carlo simulation: a computational problem whose solution gives us an answer to a scientific problem whose underlying mathematical problem nobody knows how to solve yet.

So let's look in a little bit more detail at how we're going to use our dynamic connectivity model to do this.

So it's clear that we'll create an object corresponding to each site.

And we'll give them names, from 0 to N^2 - 1, as indicated here. Then we'll connect them together if they're adjacent open sites. So the percolation model on the left corresponds to the connection model on the right, according to what we've been doing. Now, you might say: what we want to do is check whether any site in the bottom row is connected to any site in the top row, and use union-find for that. The problem with that is that it would be a brute-force algorithm, quadratic right on the face of it, because it would make N^2 calls to connected: for each site on the top, we'd check each site on the bottom. Much too slow.

Instead, what we do is create a virtual site on the top and another on the bottom. Then, when we want to know whether the system percolates, we just check whether the virtual top site is connected to the virtual bottom site. So how do we model opening a new site?

Well, to open a site, we just connect it to all of its adjacent open sites. That's a few calls to union, but it's easy to implement.
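
A hypothetical sketch of that step. It assumes sites numbered row by row, a boolean open[][] array of flags, and a union-find object uf sized n*n + 2, with the virtual top site at index n*n and the virtual bottom at n*n + 1; all of these names are illustrative:

    private int index(int row, int col) { return row * n + col; }   // site numbers 0 to n^2 - 1

    public void open(int row, int col) {
        open[row][col] = true;
        // connect the new site to each adjacent site that is already open
        if (row > 0     && open[row - 1][col]) uf.union(index(row, col), index(row - 1, col));
        if (row < n - 1 && open[row + 1][col]) uf.union(index(row, col), index(row + 1, col));
        if (col > 0     && open[row][col - 1]) uf.union(index(row, col), index(row, col - 1));
        if (col < n - 1 && open[row][col + 1]) uf.union(index(row, col), index(row, col + 1));
        // top-row sites connect to the virtual top, bottom-row sites to the virtual bottom
        if (row == 0)     uf.union(index(row, col), n * n);
        if (row == n - 1) uf.union(index(row, col), n * n + 1);
    }

    public boolean percolates() { return uf.connected(n * n, n * n + 1); }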

And with that simple relationship, we can use exactly the code that we developed to go ahead and run a simulation for this connectivity problem. That's where we get the result that, by running enough simulations for a big enough N, this percolation threshold is about 0.592746. With a fast algorithm, we can get an accurate answer to the scientific question; if we used a slow union-find algorithm, we wouldn't be able to run it for very big problems, and we wouldn't get a very accurate answer.

So, in summary, we took an important problem, the dynamic connectivity problem. We modeled the problem to try to understand precisely what kinds of data structures and algorithms we'd need to solve it. We saw a few easy algorithms for solving the problem, and quickly saw that they were inadequate for addressing huge problems. Then we saw how to improve them to get efficient algorithms, and that left us with applications that could not be solved without these efficient algorithms. All of this involves the scientific method for algorithm design, where we try to develop mathematical models that help us understand the properties of the algorithms that we're developing, and then we test those models through experimentation, enabling us to improve the algorithms, iterating to develop better algorithms and more refined models until we get what we need to solve the practical problems of interest. That's going to be the overall architecture for studying algorithms that we're going to use throughout the course.

Princeton University Lectures: Algorithms Part I – Union-Find(4) Quick Union Improvements

by Kevin Wayne, Robert Sedgewick 

Okay. So we've looked at the quick-union and quick-find algorithms, both of which are easy to implement, but neither of which can support huge dynamic connectivity problems. So how are we going to do better? That's what we'll look at next: a very effective improvement called weighting.

It might have occurred to you while we were looking at these algorithms: the idea is, when implementing the quick-union algorithm, to take steps to avoid having tall trees. If you've got a large tree and a small tree to combine, what you want to avoid is putting the large tree lower, because that's what leads to long, tall trees. And there's a relatively easy way to do that. We keep track of the number of objects in each tree, and then we maintain balance by always making sure that we link the root of the smaller tree to the root of the larger tree. So we avoid the first situation here, where we put the larger tree lower; in the weighted algorithm, we always put the smaller tree lower. Let's see how we implement that, with a demo first.

Okay, so again we start out in our normal starting position, where everybody's in their own tree. When there are only two items to link, it works the same way as before.

 

 

But now, when we have eight to merge with four and three, we put the eight as the child, no matter which order the arguments came in, because it's in the smaller tree.

Six and five: they're the same size, so it doesn't matter which one goes down.

Nine and four: now nine is in the small tree and four is in the big one, so nine is the one that goes down below.

Two and one; then five and zero. For five and zero, five is in the bigger tree, so zero goes below.

Seven and two, two is in the bigger tree so seven goes below.

Six and one: they're in equal-size trees. And seven and three: three is in the smaller tree, so it goes below. So the weighted algorithm always makes sure that the smaller tree goes below.

And again, we wind up with a single tree representing all the objects. But this time, we have a guarantee that no item is too far from the root, and we'll talk about that explicitly in a second.

So here's an example that shows the effect of doing weighted quick-union, where we always put the smaller tree down below, for the same set of union commands. This is with a hundred sites and 88 union operations. You can see that in the top tree, produced without weighting, some nodes are a fair distance from the root. In the bottom tree, for the weighted algorithm, all the nodes are within distance four of the root, and the average distance to the root is much, much lower.

Let's look at the Java implementation, and then we'll look in more detail at that quantitative information. We use the same data structure, except that now we need an extra array, sz[], which for each item gives the number of objects in the tree rooted at that item; we maintain it in the union operation. The find implementation is identical to the one for quick-union: you're just checking whether the roots are equal. For the union implementation, we modify the code to check the sizes and link the root of the smaller tree to the root of the larger tree in each case. Then, after changing the id link, we also change the size array: if we make i a child of j, then we have to increment the size of j's tree by the size of i's tree; if we do it the other way around, then we have to increment the size of i's tree by the size of j's tree. So that's the full code for implementing weighted quick-union: not very much code, but much, much better performance.
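
A sketch of the implementation just described:

    public class WeightedQuickUnionUF {
        private int[] id;   // id[i] = parent of i
        private int[] sz;   // sz[i] = number of objects in the tree rooted at i

        public WeightedQuickUnionUF(int n) {
            id = new int[n];
            sz = new int[n];
            for (int i = 0; i < n; i++) { id[i] = i; sz[i] = 1; }
        }

        private int root(int i) {
            while (i != id[i]) i = id[i];   // chase parent pointers to the root
            return i;
        }

        public boolean connected(int p, int q) { return root(p) == root(q); }

        public void union(int p, int q) {
            int i = root(p), j = root(q);
            if (i == j) return;
            if (sz[i] < sz[j]) { id[i] = j; sz[j] += sz[i]; }   // link smaller tree below
            else               { id[j] = i; sz[i] += sz[j]; }
        }
    }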

In fact, we can analyze the running time mathematically and show that the find operation takes time proportional to how far down in the tree the relevant nodes are, and we can show that it's guaranteed that the depth of any node in the tree is at most the logarithm to the base two of N. We use the notation lg for logarithm to the base two. So if N is a thousand, that depth is at most ten; if N is a million, twenty; if N is a billion, 30. It's a very small number compared to N. So let's look at the proof of that.

We do some mathematical proofs in this course when they're critical, such as this one. Why is it true that the depth of any node x is at most lg N? Well, the key to understanding that is to look at exactly when the depth of a node increases, when it goes down further in the tree. The depth of x increases by one when its tree, T1 in this diagram, is merged into some other tree, T2 in this diagram. At that point, we said, we only do that if the size of T2 is greater than or equal to the size of T1. So when the depth of x increases, the size of its tree at least doubles. That's the key, because it means that the size of the tree containing x can double at most lg N times: if you start with one and double lg N times, you get N, and there are only N nodes in the tree. So that's a sketch of a proof that the depth of any node x is at most lg N, and that has a profound impact on the performance of this algorithm.

Now, the initialization still takes time proportional to N, but both the union and the connected (find) operations take time proportional to lg N. And that is an algorithm that scales: if N grows from a million to a billion, that cost goes from twenty to 30, which is quite acceptable. Now, this was very easy to implement, and we could stop here. But usually what happens in the design of algorithms is that, now that we understand what it is that gains the performance, we take a look and see whether we can improve it even further.

And in this case, it's very easy to improve it much, much more. That's the idea of path compression. The idea is that when we're trying to find the root of the tree containing a given node, we're touching all the nodes on the path from that node to the root; while we're doing that, we might as well make each one of them just point to the root. There's no reason not to.

So when we're trying to find the root of p, after we find it, we might as well just go back and make every node on that path point directly to the root.

That's going to be a constant extra cost: we went up the path once to find the root, and now we go up again just to flatten the tree out. There's no reason not to do that. And, amazingly, it takes just one line of code to flatten the tree.

Actually, to make it a one-liner, we use a simple variant where we make every other node in the path point to its grandparent on the way up the tree. Now, that's not quite as good as totally flattening the tree, but in practice it actually is just about as good. So with one line of code, we can keep the trees almost completely flat.
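
That one-line variant, sometimes called path halving, applied to the root-finding method sketched earlier:

    private int root(int i) {
        while (i != id[i]) {
            id[i] = id[id[i]];   // the one-liner: point every other node to its grandparent
            i = id[i];
        }
        return i;
    }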

Now, people discovered this algorithm rather early on, after figuring out the weighting, and it turns out to be fascinating to analyze, quite beyond our scope. But we mention it to illustrate how even a simple algorithm can have an interesting and complex analysis. What was proved by Hopcroft, Ullman, and Tarjan is that if you have N objects, any sequence of M union and find operations touches the array at most c (N + M lg* N) times. Now, lg* N is kind of a funny function: it's the number of times you have to take the log of N to get down to one, and it's called the iterated log function. In the real world, it's best to think of it as a number less than five, because lg* of 2^65536 is five.

So that means that the running time of weighted quick-union with path compression is going to be linear in the real world. The bound can actually be improved, replacing lg* N with an even more slowly growing function related to the inverse of the Ackermann function.

Another point about this: the running time is so close to linear (time proportional to N times a very slowly growing function of N, instead of time proportional to N) that you might ask whether there is a simple algorithm that is linear. People looked for a long time for one, and it actually works out to be the case that we can prove there is no such algorithm. So there's a lot of theory that goes behind the algorithms that we use.

And it's important for us to know that theory: it will help us decide how to choose which algorithms we're going to use in practice, and where to concentrate our effort in trying to find better algorithms. It's an amazing fact, eventually proved by Fredman and Saks, that there is no linear-time algorithm for the union-find problem. But weighted quick-union with path compression, in practice, is close enough that it enables the solution of huge problems.

So that's our summary of algorithms for solving the dynamic connectivity problem. Using weighted quick-union with path compression, we can solve problems that could not otherwise be addressed. For example, with a billion operations on a billion objects (which, as I said before, might take thirty years with the slow algorithms), we can do it in six seconds. What's most important to recognize about this is that it's the algorithm design that enables the solution to the problem; a faster computer wouldn't help much. You could spend millions on a supercomputer, and maybe you could get it done in six years instead of 30, or in two months, but with a fast algorithm you can do it in seconds on your own PC.

Princeton University Lectures: Algorithms Part I – Union-Find(3) Quick Union

by Kevin Wayne, Robert Sedgewick 

All right, so quick-find is too slow for huge problems. So how are we going to do better?

Our first attempt is an alternative called quick-union. This is a so-called lazy approach to algorithm design, where we try to avoid doing work until we have to. It uses the same data structure, an array id[] of size N, but now with a different interpretation: we're going to think of the array as representing a set of trees, called a forest, as depicted at right. Each entry in the array contains a reference to its parent in the tree. So, for example, 3's parent is four and 4's parent is nine; therefore 3's entry is four and 4's entry is nine in the array. Now, each entry in the array has associated with it a root, the root of its tree.

Elements that are all by themselves, in their own connected component, point to themselves. So one points to itself, but nine also points to itself: it's the root of the tree containing two, four, and three. So from this data structure, we can associate with each item a root, which serves as a representative of its connected component.

So the root of three is nine, which we find by going up the tree from three.

Now, once we can calculate these roots, we can implement the find operation just by checking whether the two items in question have the same root; that's equivalent to asking whether they're in the same connected component. So finding the roots takes some work, but the union operation is very easy. To merge the components containing two items P and Q that are in different components, all we do is set the id of P's root to the id of Q's root; that is, we make P's tree point to Q's. So in this case, to merge the components containing three and five, we would change the entry of nine to be six, and by changing just one value in the array, we get the two large components merged together. That's the quick-union algorithm: a union operation only involves changing one entry in the array.

The find operation requires a little more work. So let's look at a demo of the implementation in operation first. Again, we start out the same way, but now each id array entry means that every one of these objects is a little tree with one node, everyone pointing to itself.

Each is the root of its own tree. So now, if we have to put four and three in the same component, all we do is take the root of the tree containing the first item and make it a child of the root of the tree containing the second item.

8

In this case, we just make 4’s parent three. Now, three and eight: again, we take the root of the tree containing the first item and make it a child of the root of the tree containing the second item. So now three, four, and eight are in the same component. Six and five: six goes below five. Nine and four: the root of the tree containing four is eight.

9

And the root of the tree containing nine is nine. So we make nine a child of eight.

10

Two and one, that’s an easy one. Now, if we ask whether eight and nine are connected, we just check that they have the same root. They both have root eight, so they’re connected.

11

Five and four: 4’s root is eight, 5’s root is five. They’re different; they’re not connected.

12

Five and zero: five becomes a child of zero.

13

Seven and two: seven becomes a child of 2’s root, which is one.

14

Six and one: 6’s root is zero, and one is its own root, so zero becomes a child of one. Each one of these union operations just involves changing one entry in the array.

15

 

And finally, seven and three: seven’s root is one, three’s root is eight, so one becomes a child of eight.

16

 

Okay, and now we have one connected component with all the items together. All right, so now let’s look at the code for implementing Quick-union. The constructor is the same as the other one: we create the array and then set each element to be its own root. Then we have a private method that implements the process of finding a root by chasing parent pointers: while i is not equal to id[i], we move i up one level in the tree by setting i equal to id[i]; once they’re equal, we’re at the root, and we return it.

So, starting at any node, you just follow i = id[i] until they’re equal, and then you’re at a root. That’s a private method that we can use to implement the find operation, or the connected operation: you just find the root of P and the root of Q and check whether they’re equal. And the union operation simply finds the two roots and sets the id of the first one to the second one. It’s actually less code than for Quick-find: no for loops, just the one while loop that we’ll have to worry about a little bit. But it’s a quick and elegant implementation of code to solve the dynamic connectivity problem, called Quick-union.
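
Here is a sketch of that implementation (the class name is mine; the structure mirrors the code just described):

public class QuickUnionUF {
    private int[] id;  // id[i] = parent of i

    public QuickUnionUF(int N) {
        id = new int[N];
        for (int i = 0; i < N; i++) id[i] = i;  // each element starts as its own root
    }

    private int root(int i) {
        while (i != id[i]) i = id[i];  // chase parent pointers until i is its own parent
        return i;
    }

    public boolean connected(int p, int q) {
        return root(p) == root(q);  // same root means same component
    }

    public void union(int p, int q) {
        int i = root(p);
        int j = root(q);
        id[i] = j;  // make p's root point to q's root -- only one array entry changes
    }
}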

17

So now we have to look at whether this code can be effective for large problems. Unfortunately, Quick-union is faster, but it’s also too slow, and it’s a little different kind of too slow than Quick-find: there are times when it can be fast, but there are also times when it’s too slow. The defect of Quick-union is that the trees can get too tall, which means the find operation gets too expensive. You could wind up with a long skinny tree, each object just pointing to the next, and then a find operation for an object at the bottom would involve going all the way up through the tree, costing up to N array accesses just to do one find. That’s going to be too slow if you have a lot of operations.
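
For instance, a hypothetical driver like this one produces exactly that long skinny tree (it assumes the QuickUnionUF sketch above):

public class TallTree {
    public static void main(String[] args) {
        int N = 100000;
        QuickUnionUF uf = new QuickUnionUF(N);
        for (int i = 1; i < N; i++)
            uf.union(0, i);  // 0 sits at the bottom of an ever-longer path, so each
                             // union re-walks it: about N^2/2 array accesses in total
    }
}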

Quiz:

18


Princeton University Lectures: Algorithms Part I – Union-Find(2) Quick Find


by Kevin Wayne, Robert Sedgewick 

Now we’ll look at our first implementation of an algorithm for solving the dynamic connectivity problem, called Quick-find.

13

This is a so-called eager algorithm for solving the connectivity problem. The data structure that we’re going to use to support the algorithm is simply an integer array, indexed by object. The interpretation is that two objects P and Q are connected if and only if their entries in the array are the same.

So, for example, with our ten objects, the id[] array that describes the situation after seven connections is illustrated in the middle of the slide. At this point, zero, five, and six are all in the same connected component, because they all have the same array entry, zero. One, two, and seven all have entry one. And three, four, eight, and nine all have entry eight.

14

So that representation shows which objects are connected. And clearly, it supports a quick implementation of the find operation: we just check the array entries to see if they’re equal, that is, check whether P and Q have the same ID. So, six and one have different IDs: one has ID one, six has ID zero. They’re not in the same connected component. Union is more difficult.

In order to merge the components containing two given objects, we have to change all the entries whose ID is equal to one of them to the other one. Arbitrarily, we choose to change the ones that are the same as P to the ones that are the same as Q. So if we’re going to union six and one, then we have to change entries zero, five, and six, everybody in the same connected component as six, from zero to one. As we’ll see, this is a bit of a problem when we have a huge number of objects, because there are a lot of values to change. But still, it’s easy to implement, so that’ll be our starting point.

15

So we’ll start with a demo of how this works. Initially, we set up the ID array with each entry equal to its index. All that says is that all the objects are independent; they’re in their own connected components. Now, when we get a union operation, say four is supposed to be unioned with three, we’re going to change all entries whose ID is equal to the first object’s ID to the second one’s. So in this case, to connect three and four, we need to change the four to a three.

16

We’ll continue with a few more so you get an idea of how it works. So, three and eight: to connect three and eight, now both three and four have to be connected to eight.

17

So both of those entries have to change to eight.

18

Okay? So now, what about six and five?

So again, we change the first one to match the second one. To connect six and five, we change the six to a five.

19

What about nine and four?

20

So, to connect nine and four, we have to change 9’s entry to be the same as 4’s. Now three, four, eight, and nine all have entry eight.

21

They’re all in the same connected component.

22

Two and one means that we connect two and one by changing the 2 to a 1.

23

Eight and nine are already connected: they have the same entries in the id[] array. So that connected query, that find, says true, they’re already connected.

24

And five and zero have different entries; they’re not connected, so we’d return false in that case. Then, if we want to connect five and zero, as usual the entries corresponding to both five and six change to zero.

25

Seven and two: union seven and two.

26

That’s an easy one. And union six and one: there are three entries that have to get changed. All those zeros have to get changed to ones.

27

So, that’s a quick demo of Quick-find. Next, we’ll look at the code for implementing it.

28

Okay, with this concrete demo in mind, moving to coding up this algorithm is pretty straightforward, although it’s an interesting programming exercise that a lot of us would get wrong the first time. So let’s start with the constructor. We have a private integer array; that’s our id array, the data structure that supports this implementation. The constructor has to create the array and then go through and set the value corresponding to each index i to i. That’s straightforward. The find operation, or connected operation, is the easy one.

29

This is the Quick-find algorithm. It simply takes its two arguments, P and Q, and checks whether their ID entries are equal, and returns that value: if they’re equal, it returns true; if they’re not equal, it returns false. The more complicated operation to implement is the union.

30

And there, we first find the ID corresponding to the first argument, and then the ID corresponding to the second argument. Then we go through the whole array, looking for the entries whose IDs are equal to the ID of the first argument, and set those to the ID of the second argument.

That’s a pretty straightforward implementation. And I mentioned that a lot of us would get this wrong: the mistake would be to test against id[p] inside the loop rather than first picking out that value. You can think about the implications of that. It’s an insidious bug.
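
Here is a sketch of that implementation (the class name is mine; the structure follows the code just described, including the fix for the bug just mentioned):

public class QuickFindUF {
    private int[] id;

    public QuickFindUF(int N) {
        id = new int[N];
        for (int i = 0; i < N; i++) id[i] = i;  // each object starts in its own component
    }

    public boolean connected(int p, int q) {
        return id[p] == id[q];  // quick: just compare the two entries
    }

    public void union(int p, int q) {
        int pid = id[p];  // save these values first; testing id[i] == id[p] inside the
        int qid = id[q];  // loop is the insidious bug, since id[p] itself may change
        for (int i = 0; i < id.length; i++)
            if (id[i] == pid) id[i] = qid;
    }
}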

31

So, that’s a fine implementation of Quick-find. The next thing to decide is how effective or efficient that algorithm is going to be. We’ll talk in some detail about how to do that, but for this it’s sufficient to just think about the number of times the code has to access the array. As we saw in the implementation, both the initialize and union operations involve for loops that go through the entire array, so they touch the array a number of times proportional to N. The find operation is quick: it checks array entries only a constant number of times. And this is problematic, because the union operation is too expensive.

In particular, if you just have N union commands on N objects, which is not unreasonable (they’re either connected or not), that will take quadratic, N^2, time. And one of the themes that we’ll return to over and over in this course is that quadratic time is much too slow. We can’t accept quadratic-time algorithms for large problems.

The reason is they don’t scale. As computers get faster and bigger, quadratic algorithms actually get slower. Now, let’s talk roughly about what I mean by that. A very rough standard, say for now, is that people have computers that can run billions of operations per second, and they have billions of entries in main memory. That means you could touch everything in main memory in about a second. It’s kind of an amazing fact that this rough standard has held for 50 or 60 years: the computers get bigger, but they also get faster, so touching everything in memory takes about the same few seconds.

It was true when computers had only a few thousand words of memory, and it’s true now that they have billions or more.

So let’s accept that as what computers are like. That means, with that huge memory, we can address huge problems. We could have billions of objects and hope to do billions of union commands on them. But the problem with the Quick-find algorithm is that that would take 10^18 operations, or, say, array accesses, touches of memory. And if you do the math (10^18 accesses at 10^9 per second is 10^9 seconds), that works out to 30-some years of computer time.
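
As a back-of-the-envelope check of that math:

public class RoughEstimate {
    public static void main(String[] args) {
        double accesses = 1e9 * 1e9;       // 10^9 unions, each touching ~10^9 entries
        double seconds  = accesses / 1e9;  // at ~10^9 array accesses per second
        double years    = seconds / (60.0 * 60 * 24 * 365);
        System.out.println(years);         // prints roughly 31.7
    }
}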

32

Obviously, it’s not practical to address such a problem on today’s computers. The reason, the problem, is that quadratic algorithms don’t scale with technology. You might have a new computer that’s ten times as fast, but you’ll also want to address a problem that’s ten times as big. With a quadratic algorithm, when you do that, it’s going to be ten times as slow. That’s the kind of situation we’re going to try to avoid, by developing more efficient algorithms for solving problems like this.

Quiz:

33


Princeton University Lectures: Algorithms Part I – Union-Find(1) Dynamic Connectivity


by Kevin Wayne, Robert Sedgewick

Welcome back to algorithms. Today, we’re going to talk about the union-find problem: a set of algorithms for solving the so-called dynamic connectivity problem. We’ll look at two classic algorithms, Quick-find and Quick-union, and some applications and improvements of those algorithms. The subtext of today’s lecture, really, is to go through the steps that we’ll follow over and over again to develop a useful algorithm. The first step is to model the problem: try to understand, basically, what the main elements of the problem are that need to be solved. Then we find some algorithm to solve it.

1

In many cases, the first algorithm we come up with will be fast enough, and maybe it fits in memory, and we’ll go ahead and use it and be off and running. But in many other cases, maybe it’s not fast enough, or there’s not enough memory. So what we do is try to figure out why, find a way to address whatever’s causing that problem, find a new algorithm, and iterate until we’re satisfied.

This is the scientific approach to designing and analyzing algorithms: we build mathematical models to try to understand what’s going on, and then we do experiments to validate those models and help us improve things. So, first we’ll talk about the dynamic connectivity problem, the model of the problem for union-find. Here’s the idea. We’re going to have a set of N objects. It doesn’t really matter what they are; we’re going to use the numbers zero through N-1 to model our objects. Then we have the idea of a connection between two objects, and we’ll postulate that there’s going to be a command that connects two objects: given two objects, provide a connection between them. And the key part of the problem is the find query, or the connected query, which just asks, is there a path connecting the two objects?

2

So, for example, in this set of ten objects, we’ve already performed a bunch of union commands, connecting four and three, three and eight, six and five, nine and four, two and one. Now we might have a connected query that asks, is zero connected to seven? Well, in this case there is no connection, so we say no. But if we ask, is eight connected to nine? We’re going to say yes, even though we don’t have a direct connection between eight and nine: there is a path from eight to three to four to nine. So that’s our problem, to be able to efficiently support these two commands for a given set of objects.

3

Now, let’s say we add a union five-zero; that creates a connection between five and zero. Seven and two creates a connection between seven and two, and six and one, between six and one. We can do one and zero too, but that’s a redundant connection, since they’re already connected. And now, if we ask is zero connected to seven, we’re going to answer yes. So that’s our problem: intermix union commands and connected queries, and efficiently support those operations for a large number of objects. So, here’s a much bigger example.

4

And you can see that we’re going to need efficient algorithms for this. First of all, you can see we’re going to need a computer; it would take quite some time for a human to figure out whether there’s a connection. In this case, there is a connection. Now, the algorithms that we’re looking at today are not actually going to give the path connecting two objects; they’re just going to answer the question, is there a path? In part two of the course, we’ll consider algorithms that explicitly find paths. They’re not as efficient as union-find, because they have more work to do.

5

 

Now, applications of these algorithms involve objects of all types. They’re used for digital photos, where the objects are pixels; for networks, where the objects are computers; social networks, where it’s people; computer chips, where it’s circuit elements; abstract things like variable names in a program, or elements in a mathematical set; or physical things like metallic sites in a composite system. So, all different types of objects. But for programming, we’re going to associate each object with a name, and we’ll just name the objects with numbers: integers from zero to N-1.

That’s a very convenient starting point for our programs, because we can then use the integers as indices into an array and quickly access information relevant to each object. It also suppresses a lot of details that are not relevant to union-find. In fact, making this mapping from an object name to an integer zero through N-1 is a fine application of a symbol table, or searching algorithm, which is one of the things that we’ll be studying later in this course.

Now, the connections. We need a few abstract properties that these connections have to satisfy, and they’re all quite natural and intuitive. We assume that “is connected to” is an equivalence relation. That is, it’s reflexive: every object is connected to itself. It’s symmetric: if P is connected to Q, then Q is connected to P. And it’s transitive: if P is connected to Q and Q is connected to R, then P is connected to R.

6

Now, these properties are very intuitive, but it’s worthwhile to state them explicitly and make sure that our algorithms maintain them. When we have an equivalence relation, a set of objects and connections divides into subsets called connected components. A connected component is a maximal set of objects that are mutually connected. For example, in this small example there are three connected components: one consisting of just object zero, a second with objects one, four, and five, and a third with the other four objects. These components have the property that any two objects inside a component are connected, and no object outside the component is connected to an object inside it. Our algorithms will gain efficiency by maintaining connected components and using that knowledge to efficiently answer the queries they’re presented with.

7

Okay, so to implement the operations, we have the find query and the union command, and we’re going to maintain the connected components. Find has to check whether two objects are in the same component, and union has to replace the components containing the two objects with their union.

So, for example, if we have these components and we get the command to union two and five, essentially we need to merge the connected components containing two and containing five into one big connected component, and then we have only two connected components.

8
All of that leads up, in the programming world, to specifying a data type, which is simply a specification of the methods that we’re going to implement in order to solve this problem. In typical Java style, we’ll create a class called UF that contains two methods: one to implement union, and the other to implement connected, which returns a boolean. The constructor takes the number of objects, so that it can build a data structure based on that number. And we have to bear in mind, as we’re building our algorithms, that not only can the number of objects be huge, but so can the number of operations.
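
As a minimal sketch, that API might be written down in Java as follows (an interface can’t express the constructor, so that part is noted in a comment; the lecture’s own version is a plain class):

public interface UF {
    // implementations are constructed with the number of objects N, named 0 through N-1
    void union(int p, int q);         // add a connection between p and q
    boolean connected(int p, int q);  // are p and q in the same component?
}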

9

We can have a very large number of union and connected operations, and our algorithms are going to have to be efficient under those conditions.

10

One of the practices that we’ll follow often in this course is to check our API design before getting too far into dealing with the problem, by building a client that is going to use the data type we develop. So, for this example, we’ve got a client that will read information from standard input.

First, an integer, which is the number of objects to be processed, and then a series of pairs of object names. What the client does is: first it reads the integer from standard input and creates a UF object. Then, as long as standard input is not empty, it reads two integers from the input. If they’re not connected, it connects them and prints them out; if they are connected, it ignores them. That’s our test client, and it’s a fine test client to make sure that any implementation does what we expect. So, that’s the setup. We’ve described the operations we want to implement, all the way down to code, and we have client code that we’re going to have to be able to service with our implementations.
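
A sketch of that client, assuming the course’s StdIn library (from the algs4 package) and any implementation with the API above (here, the QuickFindUF sketch from earlier on this page):

import edu.princeton.cs.algs4.StdIn;

public class UFClient {
    public static void main(String[] args) {
        int N = StdIn.readInt();             // number of objects
        QuickFindUF uf = new QuickFindUF(N);
        while (!StdIn.isEmpty()) {
            int p = StdIn.readInt();
            int q = StdIn.readInt();
            if (!uf.connected(p, q)) {       // ignore pairs that are already connected
                uf.union(p, q);
                System.out.println(p + " " + q);
            }
        }
    }
}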

 

Quiz:

12


Create a jQuery plugin to make your web page speak


Is there any way to have your web page’s text content spoken aloud?

There are a lot of APIs we can use, but it is not easy to choose a suitable one. I tried the Google Translate speech API (e.g. http://translate.google.com/translate_tts?ie=UTF-8&q=what+are+you+doing&tl=en-us), but there is a limitation: a sentence cannot be longer than 100 characters per call. We can split long content into an array of sentences, each shorter than 100 characters, and loop over the array to call Google’s API, but then another problem arises: Google will cancel our calls because the request frequency is too high.

In the end, I found a better one, http://www.yakitome.com. To use its API, we need to register an account and get a free API key, and there is very detailed documentation (https://www.yakitome.com/documentation/tts_api) for reference.

Below is a simple demo that I created:

See the Pen jQuery plugin web text content to speech by Wang Weiqiang (@WangWeiqiang) on CodePen.
