An intuitive overview of Taylor series Sunday, Nov 17 2013 

This is a note written for my fall 2013 Math 100 class, but it was not written “for the exam,” nor does anything on here subtly hint at anything on any exam. But I hope that this will be helpful for anyone who wants to get a basic understanding of Taylor series. What I want to do is try to get some sort of intuitive grasp on Taylor series as approximations of functions. By intuitive, I mean intuitive to those with a good grasp of functions, the basics of a first semester of calculus (derivatives, integrals, the mean value theorem, and the fundamental theorem of calculus) – so it’s a mathematical intuition. In this way, this post is a sort of follow-up of my earlier note, An Intuitive Introduction to Calculus.

PLEASE NOTE that this note was written and typeset for my (now) main site: davidlowryduda.com. You can find it here. Because of this, some of the math shown here will display better there, where I have more control. This will also serve as the announcement that davidlowryduda.com is now on an improved webhost and should be fast and fully operational. Now, back to the math.

We care about Taylor series because they allow us to approximate other functions in predictable ways. Sometimes, these approximations can be made to be very, very, very accurate without requiring too much computing power. You might have heard that computers/calculators routinely use Taylor series to calculate things like {e^x} (which is more or less true, depending on the machine). But up to this point in most students’ mathematical development, most mathematics has been clean and perfect; everything has been exact algorithms yielding exact answers for years and years. This is simply not the way of the world.

Here’s a fundamental fact to both mathematics and life: almost anything worth doing is probably pretty hard and pretty messy.

For a very recognizable example, let’s think about finding zeroes of polynomials. Finding roots of linear polynomials is very easy. If we see {5 + x = 0}, then {-5} is the zero. Similarly, finding roots of quadratic polynomials is very easy, and many of us have memorized the quadratic formula to this end. Thus {ax^2 + bx + c = 0} has solutions {x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}}. These are both nice, algorithmic, and exact. But I will guess that the vast majority of those who read this have never seen a “cubic polynomial formula” for finding roots of cubic polynomials (although it does exist, it is horrendously messy – look up Cardano’s formula). There is even an algorithmic way of finding the roots of quartic polynomials. But here’s something amazing: there is no general formula for finding the exact roots of 5th degree polynomials (or those of higher degree).

I don’t mean “we haven’t found it yet, but there may be one,” or even “you’ll have to use one of these myriad ways” – I mean it has been shown (this is the Abel–Ruffini theorem) that there is no general formula in radicals for the exact roots of polynomials of degree 5 or higher. But we certainly can approximate them arbitrarily well. So even something as simple as finding roots of polynomials, which we’ve been doing since we were in middle school, gets incredibly and unbelievably complicated.
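To make “approximate them arbitrarily well” concrete, here is a minimal sketch (the quintic is a hypothetical example of mine, not something special): bisection halves an interval known to contain a root, so fifty steps pin the root down to about fifteen decimal digits.

```python
# Approximating a root of a quintic by bisection. No formula in
# radicals exists for general degree-5 polynomials, but each step
# below halves the interval known to contain a root.

def f(x):
    return x**5 - x - 1  # f(1) = -1 < 0 and f(2) = 29 > 0

lo, hi = 1.0, 2.0
for _ in range(50):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid          # root lies in the upper half
    else:
        hi = mid          # root lies in the lower half

print((lo + hi) / 2)      # approximately 1.1673039782614187
```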

So before we hop into Taylor series directly, I want to get into the mindset of approximating functions with other functions.

1. Approximating functions with other functions

We like working with polynomials because they’re so easy to calculate and manipulate. So sometimes we try to approximate complicated functions with polynomials, a problem sometimes called “polynomial interpolation”.

Suppose we wanted to approximate {\sin(x)}. The most naive approximation we might make is to note that {\sin(0) = 0}, and so approximate {\sin(x)} by {p_0(x) = 0}. We know that it’s right at least once, and since {\sin(x)} is periodic, it’s going to be right many times. I write {p_0} to indicate that this is a degree {0} polynomial, that is, a constant polynomial. Clearly though, this is a terrible approximation, and we can do better.
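For a taste of “better,” here is a small sketch comparing {p_0} with the degree {1} and degree {3} approximations {p_1(x) = x} and {p_3(x) = x - x^3/6} (I’m peeking ahead; these are the standard next steps that this sort of note builds toward):

```python
import math

# Comparing the constant approximation p0(x) = 0 of sin(x) with the
# degree-1 and degree-3 Taylor approximations near x = 0.

def p0(x): return 0.0
def p1(x): return x
def p3(x): return x - x**3 / 6

for x in [0.1, 0.5, 1.0]:
    exact = math.sin(x)
    print(f"x={x}: sin={exact:.5f}  "
          f"p0 err={abs(p0(x) - exact):.5f}  "
          f"p1 err={abs(p1(x) - exact):.5f}  "
          f"p3 err={abs(p3(x) - exact):.5f}")
```

Even at {x = 1}, the degree {3} polynomial is already within about {0.01} of the true value.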

(more…)

Math 100: Before second midterm Thursday, Nov 7 2013 

You have a midterm next week, and it’s not going to be a cakewalk.

As requested, I’m uploading the last five weeks’ worth of worksheets, with (my) solutions. A comment on the solutions: not everything is presented in full detail, but most things are presented with most of the detail (except for the occasional problem that is far, far beyond what we actually expect you to be able to do). If you have any questions about anything, let me know. Even better, ask here – maybe others have the same questions too.

Without further ado -

And since we were unable to go over the quiz in my afternoon recitation today, I’m attaching a worked solution to the quiz as well.

Again, let me know if you have any questions. I will still have my office hours on Tuesday from 2:30-4:30pm in my office (I’m aware that this happens to be immediately before the exam – that’s not by design). And I’ll be more or less responsive by email.

Study study study!

Beginning to migrate Wednesday, Oct 23 2013 

I haven’t updated this in a bit, but I have a good reason: I am in the midst of migrating away from wordpress.com to davidlowryduda.com.

This is the first time I’ve dealt with backend-ish things, so it took me a bit to get used to the lay of the land. Although I have the domain davidlowryduda (go figure), I am currently using a free webhost (I’m going to assume that I’m not going to suddenly out-traffic it or anything). So it’s a bit slow compared to wordpress.com, but there will be no more ads! (yay!) And I have complete control over the layout, and I can have a better site.

Further, I’m experimenting with django alongside WordPress, and I haven’t yet decided which I prefer or how I might want to integrate the two.

This is all to say that davidlowryduda.com is in a state of flux, but I’ll be maintaining both this and that for a while – until I get fully set up.

Math 100: Week 4 Saturday, Sep 28 2013 

This is a post for my math 100 calculus class of fall 2013. In this post, I give the 4th week’s recitation worksheet (no solutions yet – I’m still writing them up). More pertinently, we will also go over the most recent quiz and common mistakes. Trig substitution, it turns out, is not so easy.

Before we hop into the details, I’d like to encourage you all to avail yourselves of each other, your professor, your TA, and the MRC in preparation for the first midterm (next week!).

1. The quiz

There were two versions of the quiz this week, but they were very similar. Each asked about a particular trig substitution. One was

\displaystyle \int_3^6 \sqrt{36 - x^2} \mathrm{d} x

and the other was

\displaystyle \int_{2\sqrt 2}^4 \sqrt{16 - x^2} \mathrm{d}x.

Since they are so similar, I’m only going to go over the first one. We know we are to use trig substitution. I see two ways to proceed: either draw a reference triangle (which I recommend), or think through the Pythagorean trig identities until you find the one that works here (which I don’t recommend).

We see a {\sqrt{36 - x^2}}, and this is hard to deal with. Let’s draw a right triangle that has {\sqrt{36 - x^2}} as a side. I’ve drawn one below. (Not fancy, but I need a better light).

In this picture, note that {\sin \theta = \frac{x}{6}}, or that {x = 6 \sin \theta}, and that {\sqrt{36 - x^2} = 6 \cos \theta}. If we substitute {x = 6 \sin \theta} in our integral, this means that we can replace our {\sqrt{36 - x^2}} with {6 \cos \theta}. But this is a substitution, so we need to think about {\mathrm{d} x} too. Here, {x = 6 \sin \theta} means that {\mathrm{d}x = 6 \cos \theta \, \mathrm{d}\theta}.

Some people used the wrong trig substitution, meaning they used {x = \tan \theta} or {x = \sec \theta}, and got stuck. It’s okay to get stuck, but if you notice that something isn’t working, it’s better to try something else than to stare at the paper for 10 minutes. Other people used {x = 6 \cos \theta}, which is perfectly doable and parallel to what I write below.

Another common error was people forgetting about the {\mathrm{d}x} term entirely. But it’s important!

Substituting these into our integral gives

\displaystyle \int_{?}^{??} 36 \cos^2 (\theta) \mathrm{d}\theta,

where I have included question marks for the limits because, as after most substitutions, they are different. You have a choice: you might go on and put everything back in terms of {x} before you give your numerical answer; or you might find the new limits now.

It’s not correct to continue writing down the old limits. The variable has changed, and we really don’t want {\theta} to go from {3} to {6}.

If you were to find the new limits, then you need to consider: if {x=3} and {\frac{x}{6} = \sin \theta}, then we want a {\theta} such that {\sin \theta = \frac{3}{6}= \frac{1}{2}}, so we might use {\theta = \pi/6}. Similarly, when {x = 6}, we want {\theta} such that {\sin \theta = 1}, like {\theta = \pi/2}. Note that these were two arcsine calculations, which we would have to do even if we waited until after we put everything back in terms of {x} to evaluate.

Some people left their answers in terms of these arcsines. As far as mistakes go, this isn’t a very serious one. But this is the sort of simplification that is expected of you on exams, quizzes, and homeworks. In particular, if something can be written in a much simpler way through the unit circle, then you should do it if you have the time.

So we could rewrite our integral as

\displaystyle \int_{\pi/6}^{\pi/2} 36 \cos^2 (\theta) \mathrm{d}\theta.

How do we integrate {\cos^2 \theta}? We need to make use of the identity {\cos^2 \theta = \dfrac{1 + \cos 2\theta}{2}}. You should know this identity for this midterm. Now we have

\displaystyle 36 \int_{\pi/6}^{\pi/2}\left(\frac{1}{2} + \frac{\cos 2 \theta}{2}\right) \mathrm{d}\theta = 18 \int_{\pi/6}^{\pi/2}\mathrm{d}\theta + 18 \int_{\pi/6}^{\pi/2}\cos 2\theta \mathrm{d}\theta.

The first integral is extremely simple and yields {6\pi}. The second integral has antiderivative {\dfrac{\sin 2 \theta}{2}} (don’t forget the {2} on the bottom!), and we have to evaluate {\big[9 \sin 2 \theta \big]_{\pi/6}^{\pi/2}}, which gives {-\dfrac{9 \sqrt 3}{2}}. You should know the unit circle sufficiently well to evaluate this for your midterm.

And so the final answer is {6 \pi - \dfrac{9 \sqrt 3}{2} \approx 11.0553}. (You don’t need to be able to do that approximation.)
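If you want to check this sort of computation yourself (not something you’d do on the quiz – just a sanity check of mine), a few lines of Python with a midpoint Riemann sum land right on the same value:

```python
import math

# Numerically checking the quiz integral: the integral of sqrt(36 - x^2)
# from 3 to 6 should be 6*pi - 9*sqrt(3)/2, approximately 11.0553.

def f(x):
    return math.sqrt(36 - x**2)

n = 100_000
h = (6 - 3) / n
midpoint_sum = sum(f(3 + (i + 0.5) * h) for i in range(n)) * h
print(midpoint_sum)                          # ~11.0553
print(6 * math.pi - 9 * math.sqrt(3) / 2)    # ~11.0553
```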

Let’s go back a moment and suppose you didn’t reevaluate the limits once you substituted in {\theta}. Then, following the same steps as above, you’d be left with

\displaystyle 18 \int_{?}^{??}\mathrm{d}\theta + 18 \int_{?}^{??}\cos 2\theta \mathrm{d}\theta = \left[ 18 \theta \right]_?^{??} + \left[ 9 \sin 2 \theta \right]_?^{??}.

Since {\frac{x}{6} = \sin \theta}, we know that {\theta = \arcsin (x/6)}. This is how we evaluate the left integral, and we are left with {[18 \arcsin(x/6)]_3^6}. This means we need to know the arcsine of {1} and {\frac 12}. These are exactly the same two arcsine computations that I referenced above! Following them again, we get {6\pi} as the answer.

We could do the same for the second part, since {\sin ( 2 \arcsin (x/6))} when {x = 3} is {\sin (2 \arcsin \frac{1}{2} ) = \sin (2 \cdot \frac{\pi}{6} ) = \frac{\sqrt 3}{2}}; and when {x = 6} we get {\sin (2 \arcsin 1) = \sin (2 \cdot \frac{\pi}{2}) = \sin (\pi) = 0}.

Putting these together, we see that the answer is again {6\pi - \frac{9\sqrt 3}{2}}.

Or, throwing yet another option out there, we could do something else (a little bit wittier, maybe?). We have this {\sin 2\theta} term to deal with. You might recall that {\sin 2 \theta = 2 \sin \theta \cos \theta}, the so-called double-angle identity.

Then {9 \sin 2\theta = 18 \sin \theta \cos \theta}. Going back to our reference triangle, we know that {\cos \theta = \dfrac{\sqrt{36 - x^2}}{6}} and that {\sin \theta = \dfrac{x}{6}}. Putting these together,

\displaystyle 9 \sin 2 \theta = \dfrac{ x\sqrt{36 - x^2} }{2}.

When {x=6}, this is {0}. When {x = 3}, we have {\dfrac{ 3\sqrt {27}}{2} = \dfrac{9\sqrt 3}{2}}.

And fortunately, we get the same answer again at the end of the day. (phew).

2. The worksheet

Finally, here is the worksheet for the day. I’m working on the solutions, and I’ll have them up by late this evening (sorry for the delay).

Ending tidbits – when I was last a TA, I tried to see what were the good predictors of final grade. Some things weren’t very surprising – there is a large correlation between exam scores and final grade. Some things were a bit surprising – low homework scores correlated well with low final grade, but high homework scores didn’t really have a strong correlation with final grade at all; attendance also correlated weakly. But one thing that really stuck with me was the first midterm grade vs final grade in class: it was really strong. For a bit more on that, I refer you to my final post from my Math 90 posts.

Math 100: Week 3 and pre-midterm Tuesday, Sep 24 2013 

This is a post for my Math 100 class of fall 2013. In this post, I give the first three weeks’ worksheets from recitation and the set of solutions to week three’s worksheet, as well as a few administrative details.

Firstly, here is the recitation work from the first three weeks:

  1. (there was no recitation the first week)
  2. A worksheet focusing on review.
  3. A worksheet focusing on integration by parts and u-substitution, with solutions.

In addition, I’d like to remind you that I have office hours from 2-4pm (right now) in Kassar 018. I’ve had multiple people set up appointments with me outside of these hours, which I’m tempted to interpret as suggesting that I change when my office hours are. If you have a preference, let me know, and I’ll try to incorporate it.

Finally, there will be an exam next Tuesday. I’ve been getting a lot of emails about what material will be on the exam. The answer is that everything you have learned up to now and by the end of this week is fair game for exam material. This also means there could be exam questions on material that we have not discussed in recitation. So be prepared. However, I will be setting aside a much larger portion of recitation this Thursday for questions than normal. So come prepared with your questions.

Best of luck, and I’ll see you in class on Thursday.

Happy Birthday to The Science Guy Tuesday, Sep 10 2013 

On 10 July 1917, Donald Jeffry Kemske (later known as Don Herbert) was born in Waconia, Minnesota. Back when university educations were a bit more about education and a bit less about establishing vocation, Donald studied general science and English at La Crosse State Normal College (which is now the University of Wisconsin-La Crosse). But Donald liked drama, and he became an actor. When World War II broke out, Donald joined the US Army Air Forces, flying over 50 missions as a bomber pilot.

After the war, Donald began to act in children’s programs at a radio station in Chicago. Perhaps it was because of his love of children’s education, perhaps it was the sudden visibility of the power of science, as evidenced by the nuclear bomb, or perhaps something else – but Donald had an idea for a tv show based around general science experiments. And so Watch Mr. Wizard was born on 3 March 1951 on NBC. (When I think about it, I’m surprised at how early this was in the life of television programming). Each week, a young boy or a girl would join Mr. Wizard (played by Donald) on a live tv show, where they would be shown interesting and easily-reproducible science experiments.

Watch Mr. Wizard was the first such tv program, and one might argue that its effects are still felt today. A total of 547 episodes of Watch Mr. Wizard aired. By 1956, over 5000 local Mr. Wizard science clubs had been started around the country; by 1965, when the show was cancelled by NBC, there were more than 50000. In fact, my parents have told me of Mr. Wizard and his fascinating programs. Such was the love and reach of Mr. Wizard that on the first episode of Late Night with David Letterman, the guests were Bill Murray, Steve Fessler, and Mr. Wizard. He’s also mentioned in the song Walkin’ On the Sun by Smash Mouth. Were it possible for me to credit the many scientists that certainly owe their start to Mr. Wizard, I would.

I mention this because the legacy of Mr. Wizard was passed down. Don Herbert passed away on June 12, 2007. In an obituary published a few days later, Bill Nye writes that “Herbert’s techniques and performances helped create the United States’ first generation of homegrown rocket scientists just in time to respond to Sputnik. He sent us to the moon. He changed the world.” Reading the obituary, you cannot help but think that Bill Nye was also inspired to start his show by Mr. Wizard.

In fact, 20 years ago today, on 10 September 1993, the first episode of Bill Nye the Science Guy aired on PBS. It’s much more likely that readers of this blog have heard of Bill Nye; even though production of the show halted in 1998, PBS still airs reruns, and it’s commonly used in schools (did you know it won an incredible 19 Emmys?). I, for one, loved Bill Nye the Science Guy, and I still follow him to this day. I think it is impossible to narrow down the source of my initial interest in science, but I can certainly say that Bill Nye furthered my interest in science and experiments. He made science seem cool and powerful. To be clear, I know science is still cool and powerful, but I’m not so sure that’s the popular opinion. (As an aside: I also think math would really benefit from having our own Bill Nye).

(more…)

An intuitive introduction to calculus Saturday, Sep 7 2013 

This is a post written for my fall 2013 Math 100 class but largely intended for anyone with knowledge of what a function is and a desire to know what calculus is all about. Calculus is made out to be the pinnacle of the high school math curriculum, and correspondingly is thought to be very hard. But the difficulty is bloated, blown out of proportion. In fact, the ideas behind calculus are approachable and even intuitive if thought about in the right way.

Many people managed to stumble across the page before I’d finished all the graphics. I’m sorry, but they’re all done now! I was having trouble interpreting how WordPress was going to handle my gif files – it turns out that they automagically resize them if you don’t make them of the correct size, which makes them not display. It took me a bit to realize this. I’d like to mention that this actually started as a 90 minute talk I had with my wife over coffee, so perhaps an alternate title would be “Learning calculus in 2 hours over a cup of coffee.”

So read on if you would like to understand what calculus is, or if you’re looking for a refresher of the concepts from a first semester in calculus (like for Math 100 students at Brown), or if you’re looking for a bird’s eye view of AP Calc AB subject material.

1. An intuitive and semicomplete introduction to calculus

We will think of a function {f(\cdot)} as something that takes an input {x} and gives out another number, which we’ll denote by {f(x)}. We know functions like {f(x) = x^2 + 1}, which means that if I give in a number {x} then the function returns the number {f(x) = x^2 + 1}. So I put in {1}, I get {1^2 + 1 = 2}, i.e. {f(1) = 2}. Primary and secondary schooling overly conditions students to think of functions in terms of a formula or equation. The important thing to remember is that a function is really just something that gives an output when given an input, and if the same input is given later then the function spits out the same output. As an aside, I should mention that the most common problem I’ve seen in my teaching and tutoring is a fundamental misunderstanding of functions and their graphs.

For a function that takes in and spits out numbers, we can associate a graph. A graph is a two-dimensional representation of our function, where by convention the input is put on the horizontal axis and the output is put on the vertical axis. Each axis is numbered, and in this way we can identify any point in the graph by its coordinates, i.e. its horizontal and vertical position. A graph of a function {f(x)} includes a point {(x,y)} if {y = f(x)}.

The graph of the function x^2 + 1 is in blue. The emphasized point appears on the graph because it is of the form (x, f(x)). In particular, this point is (1, 2).

Thus each point on the graph is really of the form {(x, f(x))}. A large portion of algebra I and II is devoted to being able to draw graphs for a variety of functions. And if you think about it, graphs contain a huge amount of information. Graphing {f(x)= x^2 + 1} involves drawing an upwards-facing parabola, which really represents an infinite number of points. That’s pretty intense, but it’s not what I want to focus on here.

1.1. Generalizing slope – introducing the derivative

You might recall the idea of the ‘slope’ of a line. A line has a constant ratio of how much the {y} value changes for a specific change in {x}, which we call the slope (people always seem to remember rise over run). In particular, if a line passes through the points {(x_1, y_1)} and {(x_2, y_2)}, then its slope will be the vertical change {y_2 - y_1} divided by the horizontal change {x_2 - x_1}, or {\dfrac{y_2 - y_1}{x_2 - x_1}}.

The graph of a line appears in blue. The two points (0,1) and (1,3) are shown on the line. The horizontal red line shows the horizontal change. The vertical red line shows the vertical change. The ‘slope’ of the blue line is the length of the vertical red line divided by the length of the horizontal red line.

So if the line is given by an equation {f(x) = \text{something}}, then the slope from two inputs {x_1} and {x_2} is {\dfrac{f(x_2) - f(x_1)}{x_2 - x_1}}. As an aside, for those that remember things like the ‘standard equation’ {y = mx + b} or ‘point-slope’ {(y - y_0) = m(x - x_0)} but who have never thought about or been taught where these come from: the claim that lines are the curves of constant slope says that for any choice of {(x_1, y_1)} on the line, we expect {\dfrac{y_2 - y_1}{x_2 - x_1} = m}, a constant, which I denote by {m} for no particularly good reason other than the fact that some textbook author long ago did such a thing. Since we’re allowing ourselves to choose any {(x_1, y_1)} on the line, we might drop the subscripts and call the point {(x, y)}; rearranging our equation then gives {y_2 - y = m(x_2 - x)}, which is what has been so unkindly drilled into students’ heads as the ‘point-slope form.’ This is why lines have a point-slope form, and the reason it comes up so much is that it comes so naturally from the defining characteristic of a line, i.e. constant slope.

But one cannot speak of the ‘slope’ of a parabola.

The parabola f(x) = x^2 + 1 is shown in blue. Slope is a measure of how much the function f(x) changes when x is changed. Some tangent lines to the parabola are shown in red. The slope of each line seems like it should be the ‘slope’ of the parabola where the line touches the parabola, but these slopes are different.

Intuitively, we look at our parabola {x^2 + 1} and see that the ‘slope,’ or an estimate of how much the function {f(x)} changes with a change in {x}, seems to be changing depending on what {x} values we choose. (This should make sense – if it didn’t change, and had constant slope, then it would be a line.) The first major goal of calculus is to come up with an idea of a ‘slope’ for non-linear functions. I should add that we already know a sort of ‘instantaneous rate of change’ of a nonlinear function. When we’re in a car and we’re driving somewhere, we’re usually speeding up or slowing down, and our pace isn’t usually linear. Yet our speedometer still manages to say how fast we’re going, which is an instantaneous rate of change. So if we had a function {p(t)} that gave us our position at a time {t}, then the slope would give us our velocity (change in position per change in time) at a moment. So without knowing it, we’re familiar with a generalized slope already. Now in our parabola, we don’t expect a constant slope, so we want to associate a ‘slope’ to each input {x}. In other words, we want to be able to understand how rapidly the function {f(x)} is changing at each {x}, analogous to how the slope {m} of a line {g(x) = mx + b} tells us that if we change our input by an amount {h} then our output value will change by {mh}.

How does calculus do that? The idea is to get closer and closer approximations. Suppose we want to find the ‘slope’ of our parabola at the point {x = 1}. Let’s get an approximate answer. The slope of the line coming from inputs {x = 1} and {x = 2} is a (poor) approximation. In particular, since we’re working with {f(x) = x^2 + 1}, we have that {f(2) = 5} and {f(1) = 2}, so that the ‘approximate slope’ from {x = 1} and {x = 2} is {\frac{5 - 2}{2 - 1} = 3}. But looking at the graph,

The parabola x^2 + 1 is shown in blue, and the line going through the points (1,2) and (2,5) is shown. The line immediately goes above and crosses the parabola, so it seems like this line is rising faster (changing faster) than the parabola. It’s too steep, and the slope is too high to reflect the ‘slope’ of the parabola at the indicated point.

we see that it feels like this slope is too large. So let’s get closer. Suppose we use inputs {x = 1} and {x = 1.5}. We get that the approximate slope is {\frac{3.25 - 2}{1.5 - 1} = 2.5}. If we were to graph it, this would also feel too large. So we can keep choosing smaller and smaller changes, like using {x = 1} and {x = 1.1}, or {x = 1} and {x = 1.01}, and so on. This next graphic contains these approximations, with chosen points getting closer and closer to {1}.

The parabola x^2 + 1 is shown in blue. Two points are chosen on the parabola and the line between them is drawn in red. As the points get closer to each other, the red line indicates the rate of growth of the parabola at the point (1,2) better and better. So the slope of the red lines seems to be getting closer to the ‘slope’ of the parabola at (1,2).

Let’s look a little closer at the values we’re getting for our slopes when we use {1} and {2, 1.5, 1.1, 1.01, 1.001} as our inputs. We get

\displaystyle \begin{array}{c|c} \text{second input} & \text{approx. slope} \\ \hline 2 & 3 \\ 1.5 & 2.5 \\ 1.1 & 2.1 \\ 1.01 & 2.01 \\ 1.001 & 2.001 \end{array}
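If you’d like to generate these numbers yourself, here is a tiny sketch (mine, just an illustration):

```python
# Approximate slopes of f(x) = x^2 + 1 at x = 1: the slope of the
# secant line through (1, f(1)) and (x2, f(x2)) as x2 approaches 1.

def f(x):
    return x**2 + 1

for x2 in [2, 1.5, 1.1, 1.01, 1.001]:
    slope = (f(x2) - f(1)) / (x2 - 1)
    print(f"second input {x2}: approx. slope {slope:.4f}")
```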

It looks like the approximate slopes are approaching {2}. What if we plot the graph with a line of slope {2} going through the point {(1,2)}?

The parabola x^2 + 1 is shown in blue. The line in red has slope 2 and goes through the point (1,2). We got this line by continuing the successive approximations done above. It looks like it accurately indicates the ‘slope’ of the parabola at (1,2).

It looks great! Let’s zoom in a whole lot.

When we zoom in, the blue parabola looks almost like a line, and the red line looks almost like the parabola! This is why we are measuring the ‘slope’ of the parabola in this fashion – when we zoom in, it looks more and more like a line, and we are getting the slope of that line.

That looks really close! In fact, what I’ve been calling the natural-feeling slope, or local rate of change, is really the line tangent to the graph of our function at the point {(1, f(1))}. In a calculus class, you’ll spend a bit of time making sense of what it means for the approximate slopes to ‘approach’ {2}. This is called a ‘limit,’ and the details are not important to us right now. The important thing is that this lets us get an idea of a ‘slope’ at a point on a parabola. It’s not really a slope, because a parabola isn’t a line. So we’ve given it a different name – we call this ‘the derivative.’ So the derivative of {f(x) = x^2 + 1} at {x = 1} is {2}, i.e. right around {x = 1} we expect a rate of change of {2}, so that we expect {f(1 + h) - f(1) \approx 2h}. If you think about it, we’re saying that we can approximate {f(x) = x^2 + 1} near the point {(1, 2)} by the line shown in the graph above: this line passes through {(1,2)} and its slope is {2}, what we’re calling the slope of {f(x) = x^2 + 1} at {x = 1}.

Let’s generalize. We were able to speak of the derivative at one point, but how about other points? The rest of this post is below the ‘more’ tag.

(more…)

Twenty Mathematicians, Two Hard Problems, One Week, IdeaLab2013 Friday, Aug 2 2013 

July has been an exciting and busy month for me. I taught number theory 3 hours a day, 5 days a week, for 3 weeks to (mostly) devoted and motivated high school students in the Summer@Brown program. In the middle, I moved to Massachusetts. Immediately after the Summer@Brown program ended, I was given the opportunity to return to ICERM to participate in an experimental program called an IdeaLab.

IdeaLab invited 20 early career mathematicians to come together for a week and to generate ideas on two very different problems: Tipping Points in Climate Systems and Efficient Fully Homomorphic Encryption. Although I plan on writing a bit more about each of these problems and the IdeaLab process in action (at least from my point of view), I should say something about what these are.

Models of Earth’s climate are used all the time, to give daily weather reports, to predict and warn about hurricanes, to attempt to understand the effects of anthropogenic sources of carbon on long-term climate. As we know from uncertainty about weather reports, these models aren’t perfect. In particular, they don’t currently predict sudden, abrupt changes called ‘tipping points.’ But are tipping points possible? There have been warm periods following ice-ages in the past, so it seems that there might be tipping points that aren’t modelled in the system. Understanding these forms the basis for the idea behind the Tipping Points in Climate Systems project. This project also forms another link in Mathematics of Planet Earth.

On the other hand, homomorphic encryption is a topic in modern cryptography. To encrypt a message is to make it hard or impossible for others to read it unless they have a ‘key.’ You might think that you wouldn’t want someone holding onto encrypted data to be able to do anything with the data, and in most modern encryption algorithms this is the case. But what if we were able to give Google an encrypted dataset and ask them to perform a search on it? Is it possible to have a secure encryption that would allow Google to do some sort of search algorithm and give us the results, but without Google ever understanding the data itself? It may seem far-fetched, but this is exactly the idea behind the Efficient Fully Homomorphic Encryption group. Surprisingly enough, it is possible. But known methods are obnoxiously slow and infeasible. This is why the group was after ‘efficient’ encryption.
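For a tiny taste of computing on ciphertexts (this is my own toy illustration, not anything from the IdeaLab): textbook RSA is homomorphic with respect to multiplication, so whoever holds two ciphertexts can produce a valid encryption of the product of the plaintexts without ever decrypting anything. Fully homomorphic schemes support both addition and multiplication, which is what opens the door to arbitrary computation on encrypted data.

```python
# Toy demonstration of a (partially) homomorphic scheme: in textbook
# RSA, Enc(m1) * Enc(m2) mod n is a valid encryption of m1 * m2.
# The parameters here are tiny and utterly insecure; they only show
# the algebraic idea.

p, q, e = 61, 53, 17
n = p * q                           # public modulus, 3233
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

c = (enc(7) * enc(6)) % n   # computed using the ciphertexts only
print(dec(c))               # 42, the product of the plaintexts
```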

So 20 early career mathematicians from all sorts of areas of mathematics gathered to think about these two questions. For the rest of this post, I’d like to talk about the structure and my thoughts on the IdeaLab process. In later posts, I’ll talk about each of the two major topics and what sorts of ideas came out of the process.

(more…)

Chinese Remainder Theorem (SummerNT) Tuesday, Jul 9 2013 

This post picks up from the previous post on Summer@Brown number theory from 2013.

Now that we’d established ideas about solving the modular equation ax \equiv c \mod m, solving the linear diophantine equation ax + by = c, and about general modular arithmetic, we began to explore systems of modular equations. That is, we began to look at problems like the following:

Suppose x satisfies the following three modular equations (rather, the following system of linear congruences):

x \equiv 1 \mod 5

x \equiv 2 \mod 7

x \equiv 3 \mod 9

Can we find out what x is? This is a clear parallel to solving systems of linear equations, as is usually done in algebra I or II in secondary school. A common way to solve systems of linear equations is to solve for a variable and substitute it into the next equation. We can do something similar here.

From the first equation, we know that x = 1 + 5a for some a. Substituting this into the second equation, we get that 1 + 5a \equiv 2 \mod 7, or that 5a \equiv 1 \mod 7. So a will be the modular inverse of 5 \mod 7. A quick calculation (or a slightly less quick Euclidean algorithm in the general case) shows that the inverse is 3. Multiplying both sides by 3 yields a \equiv 3 \mod 7, or rather that a = 3 + 7b for some b. Back substituting, we see that this means that x = 1+5a = 1+5(3+7b), or that x = 16 + 35b.

Now we repeat this work, using the third equation. 16 + 35b \equiv 3 \mod 9, so that 8b \equiv 5 \mod 9. Another quick calculation (or Euclidean algorithm) shows that this means b \equiv 4 \mod 9, or rather b = 4 + 9c for some c. Putting this back into x yields the final answer:

x = 16 + 35(4 + 9c) = 156 + 315c

x \equiv 156 \mod 315

And if you go back and check, you can see that this works. \diamondsuit
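Here’s a short sketch (mine, not from class) of the back-substitution method we just carried out, assuming pairwise coprime moduli; pow(r, -1, n) is Python’s built-in modular inverse (available since Python 3.8):

```python
# Chinese Remainder Theorem by back-substitution, assuming the moduli
# are pairwise coprime.

def crt(residues, moduli):
    x, m = residues[0], moduli[0]
    for r, n in zip(residues[1:], moduli[1:]):
        # we need x + m*a = r (mod n); solve for a with a modular inverse
        a = ((r - x) * pow(m, -1, n)) % n
        x, m = x + m * a, m * n
    return x % m, m

print(crt([1, 2, 3], [5, 7, 9]))   # (156, 315), matching the work above
```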

There is another, very slick, method as well. This was a clever solution mentioned in class. The idea is to construct a solution directly. The way we’re going to do this is to set up a sum, where each part only contributes to one of the three modular equations. In particular, note that if we take something like 7 \cdot 9 \cdot [7\cdot9]_5^{-1}, where this inverse means the modular inverse with respect to 5, then this vanishes mod 7 and mod 9, but gives 1 \mod 5. Similarly 2\cdot 5 \cdot 9 \cdot [5\cdot9]_7^{-1} vanishes mod 5 and mod 9 but leaves the right remainder mod 7, and 3 \cdot 5 \cdot 7 \cdot [5\cdot 7]_9^{-1} vanishes mod 5 and mod 7, but leaves the right remainder mod 9.

Summing them together yields a solution (do you see why?). The really nice thing about this algorithm is that it parallelizes really well, meaning that you can give different computers separate problems, and then combine the results to get the final answer. This is going to come up again later in this post.

These are two solutions that follow along the idea of the Chinese Remainder Theorem (CRT), which in general says that as long as the moduli m_1, \ldots, m_k are pairwise relatively prime (and each a_i is invertible \mod m_i), the system

a_1 x \equiv b_1 \mod m_1

a_2 x \equiv b_2 \mod m_2

\cdots

a_k x \equiv b_k \mod m_k

will always have a unique solution \mod m_1m_2 \ldots m_k. Note, this is two statements: there is a solution (statement 1), and the solution is unique up to modding by the product of the moduli (statement 2). Proof Sketch: Either of the two methods described above to solve that problem can lead to a proof here. But there is one big step that makes such a proof much easier. Once you’ve shown that the CRT is true for a system of two congruences (effectively meaning you can replace them by one congruence), this means that you can use induction. You can reduce the (n+1)st case to the nth case using your newfound knowledge of how to combine two equations into one. Then the inductive hypothesis carries out the proof.

Note also that it’s pretty easy to go backwards. If I know that x \equiv 12 \mod 30, then I know that x will also be the solution to the system

x \equiv 2 \mod 5

x \equiv 0 \mod 6

In fact, a higher view of the CRT reveals that the great strength is that considering a number mod a set of relatively prime moduli is the exact same (isomorphic to) considering a number mod the product of the moduli.

The remainder of this post will be about why the CRT is cool and useful.

Application 1: Multiplying Large Numbers

Firstly, the easier application. Suppose you have two really large integers a,b (by really large, I mean with tens or hundreds of digits at least – for concreteness, say they each have n digits). When a computer computes their product ab, it has to perform n^2 digit multiplications, which can be a whole lot if n is big. But a computer can calculate mods of numbers in something like \log n time, which is much much much faster. So one way to quickly compute the product of two really large numbers is to use the Chinese Remainder Theorem to represent each of a and b with a set of much smaller congruences. For example (though we’ll be using small numbers), say we want to multiply 12 by 21. We might represent 12 by 12 \equiv 2 \mod 5, 5 \mod 7, 1 \mod 11 and represent 21 by 21 \equiv 1 \mod 5, 0 \mod 7, 10 \mod 11. To find their product, calculate their product in each of the moduli: 2 \cdot 1 \equiv 2 \mod 5, 5 \cdot 0 \equiv 0 \mod 7, 1 \cdot 10 \equiv 10 \mod 11. We know we can get a solution to the resulting system of congruences using the above algorithm, and since the product of the moduli (here 5 \cdot 7 \cdot 11 = 385) exceeds the true product, the smallest positive solution will be the actual product.
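As a sanity check on that example (reusing the crt routine sketched earlier in this post):

```python
# Multiplying 12 by 21 via residues: multiply componentwise mod 5, 7,
# and 11, then reassemble with the CRT. The moduli's product, 385,
# exceeds 12 * 21, so the smallest solution is the true product.

moduli = [5, 7, 11]
a_res = [12 % m for m in moduli]    # [2, 5, 1]
b_res = [21 % m for m in moduli]    # [1, 0, 10]
prod_res = [(x * y) % m for x, y, m in zip(a_res, b_res, moduli)]
print(crt(prod_res, moduli))        # (252, 385), and 252 = 12 * 21
```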

This might not feel faster, but for much larger numbers, it really is. As an aside, here’s one way to make it play nice for parallel processing (which vastly makes things faster). After you’ve computed the congruences of 12 and 21 for the different moduli, send the numbers mod 5 to one computer, the numbers mod 7 to another, and the numbers mod 11 to a third (but also send each computer the list of moduli: 5,7,11). Each computer will calculate the product in its modulus and then use the Euclidean algorithm to calculate the inverse of the product of the other two moduli, and multiply these together. Afterwards, the computers resend their data to a central computer, which just adds the results and takes the sum mod 5 \cdot 7 \cdot 11 (to get the smallest positive solution). Since mods are fast and all the multiplication is with smaller integers (no bigger than the largest modulus, ever), it all goes faster. And since it’s parallelized, you’re replacing a hard task with a bunch of smaller, easier tasks that can all be worked on at the same time. Very powerful stuff!

I have actually never seen someone give the optimal running time that would come from this sort of procedure, though I don’t know why. Perhaps I’ll look into that one day.

Application 2: Secret Sharing in Networks of People

This is really slick. Let’s lay out the situation: I have a secret. I want you, my students, to have access to the secret, but only if  at least six of you decide together that you want access. So I give each of you a message, consisting of a number and a modulus. Using the CRT, I can create a scheme where if any six of you decide you want to open the message, then you can pool your six bits together to get the message. Notice, I mean any six of you, instead of a designated set of six. Further, no five people can recover the message without a sixth in a reasonable amount of time. That’s pretty slick, right?

The basic idea is for me to encode my message as a number P (I use P to mean plain-text). Then I choose a set of moduli, one for each of you, but I choose them in such a way that the product of any 5 of them is smaller than P, but the product of any 6 of them is greater than P (what this means is that I choose a lot of primes or near-primes of roughly the same size, right around the fifth root of P). To each of you, I give the value of P \mod m_i and the modulus m_i, where m_i is your modulus. Since P is much bigger than m_i, it would take you a very long time to just happen across the correct multiple that reveals a message (if you ever managed). Now, once six of you get together and put your pieces together, the CRT guarantees a solution. Since the product of your six moduli will be larger than P, the smallest solution will be P. But if only five of you get together, since the product of your moduli is less than P, you don’t recover P. In this way, we have our secret sharing network.
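Here is a toy sketch of the scheme (this construction is known in the literature as Mignotte’s threshold scheme; my numbers are tiny and hypothetical, and I’ve used a 4-of-5 threshold instead of six so the arithmetic fits on a line). It reuses the crt routine from earlier in this post.

```python
# Threshold secret sharing with the CRT: any 4 of the 5 shares recover
# the secret; any 3 fall short because their moduli's product is
# smaller than the secret.

moduli = [11, 13, 17, 19, 23]    # pairwise coprime, roughly equal size
P = 40000                        # secret: any 3 moduli multiply to less
                                 # than P; any 4 multiply to more

shares = [(P % m, m) for m in moduli]    # one (residue, modulus) pair each

residues, mods = zip(*shares[:4])        # any four shareholders team up
print(crt(list(residues), list(mods)))   # (40000, 46189): secret recovered
```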

To get an idea of the security of this protocol, you might imagine if I gave each of you moduli around the size of a quadrillion. Then missing any single person means there are hundreds of trillions of reasonable multiples of your partial plain-text to check before getting to the correct multiple.

A similar idea, but which doesn’t really use the CRT, is to consider the following problem: suppose two millionaires Alice and Bob (two people of cryptological fame) want to see which of them is richer, but without revealing how much wealth they actually have. This might sound impossible, but indeed it is not! There is a way for them to establish which one is richer but with neither knowing how much money the other has. Similar problems exist for larger parties (more than just 2 people), but none is more famous than the original: Yao’s Millionaire Problem.

Alright – I’ll see you all in class.

Notes on the first week (SummerNT) Monday, Jul 1 2013 

We’ve covered a lot of ground this first week! I wanted to provide a written summary, with partial proof, of what we have done so far.

We began by learning about proofs. We talked about direct proofs, inductive proofs, proofs by contradiction, and proofs by using the contrapositive of the statement we want to prove. A proof is a justification and argument based upon certain logical premises (which we call axioms); in contrast to other disciplines, a mathematical proof is completely logical and can be correct or incorrect.

We then established a set of axioms for the integers that would serve as the foundation of our exploration into the (often fantastic yet sometimes frustrating) realm of number theory. In short, the integers are a non-empty set with addition and multiplication [which are both associative, commutative, and have an identity, and which behave as we think they should behave; further, there are additive inverses], a total order [an integer is either bigger than, less than, or equal to any other integer, and it behaves like we think it should under addition and multiplication], and satisfying the deceptively important well ordering principle [every nonempty set of positive integers has a least element].

With this logical framework in place, we really began number theory in earnest. We talked about divisibility [we say that a divides b, written a \mid b, if b = ak for some integer k]. We showed that every number has a prime factorization. To do this, we used the well-ordering principle.

Suppose that not all integers have a prime factorization. Then there must be a smallest integer that does not have a prime factorization: call it n. Then we know that n is either a prime or a composite. If it’s prime, then it has a prime factorization. If it’s composite, then it factors as n = ab with a,b < n. But then we know that each of a, b have prime factorizations since they are less than n. Multiplying them together, we see that n also has a prime factorization after all. \diamondsuit
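The proof is constructive in spirit: keep splitting off factors until only primes remain. A direct sketch of this (trial division; my own illustration, not from class):

```python
# Factor n by repeatedly dividing out its smallest remaining factor,
# which is necessarily prime. Mirrors the existence argument above.

def prime_factorization(n):
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)   # whatever remains is prime
    return factors

print(prime_factorization(30031))   # [59, 509] - this number returns below!
```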

Our first major result is the following:

There are infinitely many primes

There are many proofs, and we saw 2 of them in class. For posterity, I’ll present three here.

First proof that there are infinitely many primes

Take a finite collection of primes, say p_1, p_2, \ldots, p_k. We will show that there is at least one more prime not mentioned in the collection. To see this, consider the number p_1 p_2 \ldots p_k + 1. We know that this number will factor into primes, but upon division by every prime in our collection, it leaves a remainder of 1. Thus it has at least one prime factor different than every factor in our collection. \diamondsuit

This was a common proof used in class. A pattern also quickly emerges: 2 + 1 = 3, a prime. 2\cdot3 + 1 = 7, a prime. 2 \cdot 3 \cdot 5 + 1 = 31, also a prime. Is it always the case that a product of primes plus one is another prime? No, in fact. If you look at 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1=30031 = 59\cdot 509, you get a nonprime.

Second proof that there are infinitely many primes

In a similar vein to the first proof, we will show that there is always a prime larger than n for any positive integer n. To see this, consider n! + 1. Upon dividing by any prime less than n, we get a remainder of 1. So all of its prime factors are larger than n, and so there are infinitely many primes. \diamondsuit

I would also like to present one more, which I’ve always liked.

Third proof that there are infinitely many primes

Suppose there are only finitely many primes p_1, \ldots, p_k. Then consider the two numbers n = p_1 \cdot \dots \cdot p_k and n -1. We know that n - 1 has a prime factor, so that it must share a factor P with n since n is the product of all the primes. But then P divides n - (n - 1) = 1, which is nonsense; no prime divides 1. Thus there are infinitely many primes. \diamondsuit

We also looked at modular arithmetic, often called the arithmetic of a clock. When we say that a \equiv b \mod m, we mean to say that m | (b - a), or equivalently that a = b + km for some integer k (can you show these are equivalent?). And we pronounce that statement as ” a is congruent to b mod m.” We played a lot with modular arithmetic: we added, subtracted, and multiplied many times, hopefully enough to build a bit of familiarity with the feel. In most ways, it feels like regular arithmetic. But in some ways, it’s different. Looking at the integers \mod m partitions the integers into a set of equivalence classes, i.e. into sets of integers that are congruent to 0 \mod m, 1 \mod m, \ldots. When we talk about adding or multiplying numbers \mod m, we’re really talking about manipulating these equivalence classes. (This isn’t super important to us – just a hint at what’s going on beneath the surface.)

We expect that if a \equiv b \mod m, then we would also have ac \equiv bc \mod m for any integer c, and this is true (can you prove this?). But we would also expect that if we had ac \equiv bc \mod m, then we would necessarily have a \equiv b \mod m, i.e. that we can cancel out the same number on each side. And it turns out that’s not the case. For example, 4 \cdot 2 \equiv 4 \cdot 5 \mod 6 (both are 2 \mod 6), but ‘cancelling the fours’ says that 2 \equiv 5 \mod 6 – that’s simply not true. With this example in mind, we went about proving things about modular arithmetic. It’s important to know what one can and can’t do.

One very big and important observation that we noted is that it doesn’t matter what order we operate, as in it doesn’t matter if we multiply an expression out and then ‘mod it’ down, or ‘mod it down’ and then multiply, or if we intermix these operations. Knowing this allows us to simplify expressions like 11^4 \mod 12, since we know 11 \equiv -1 \mod 12, and we know that (-1)^2 \equiv 1 \mod 12, and so 11^4 \equiv (-1)^{2\cdot 2} \equiv 1 \mod 12. If we’d wanted to, we could have multiplied it out and then reduced – the choice is ours!
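You can watch this order-of-operations freedom in action (a quick illustration of mine; Python’s three-argument pow reduces mod 12 at every intermediate step internally):

```python
# Reducing before, during, or after the multiplication: same answer.

print(11**4 % 12)        # multiply it all out, then reduce
print(pow(11, 4, 12))    # reduce as you go
print((-1)**4 % 12)      # first replace 11 with -1, which is 11 mod 12
```

All three print 1.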

Amidst our exploration of modular arithmetic, we noticed some patterns. Some numbers  are invertible in the modular sense, while others are not. For example, 5 \cdot 5 \equiv 1 \mod 6, so in that sense, we might think of \frac{1}{5} \equiv 5 \mod 6. More interestingly but in the same vein, \frac{1}{2} \equiv 6 \mod 11 since 2 \cdot 6 \equiv 1 \mod 11. Stated more formally, a number a has a modular inverse a^{-1} \mod m if there is a solution to the modular equation ax \equiv 1 \mod m, in which case that solution is the modular inverse. When does this happen? Are these units special?

Returning to division, we think of the greatest common divisor. I showed you the Euclidean algorithm, and you managed to prove it in class. The Euclidean algorithm produces the greatest common divisor of a and b, and it looks like this (where I assume that b > a):

b = q_1 a + r_1

a = q_2 r_1 + r_2

r_1 = q_3 r_2 + r_3

\cdots

r_k = q_{k+2}r_{k+1} + r_{k+2}

r_{k+1} = q_{k+3}r_{k+2} + 0

where in each step, we just did regular old division to guarantee a remainder r_i that was less than the divisor. As the divisors become the remainders, this yields a strictly decreasing remainder at each iteration, so it will terminate (in fact, it’s very fast). Further, using the notation from above, I claimed that the gcd of a and b was the last nonzero remainder, in this case r_{k+2}. How did we prove it?

Proof of Euclidean Algorithm

Suppose that d is a common divisor (such as the greatest common divisor) of a and b. Then d divides the left hand side of b - q_1 a = r_1, and thus must also divide the right hand side. So any divisor of a and b is also a divisor of r_1. This carries down the list, so that the gcd of a and b will divide each remainder term. How do we know that the last remainder is exactly the gcd, and no more? The way we proved it in class relied on the observation that r_{k+2} \mid r_{k+1}. But then r_{k+2} divides the right hand side of r_k = q_{k+2} r_{k+1} + r_{k+2}, and so it also divides the left. This also carries up the chain, so that r_{k+2} divides both a and b. So it is itself a divisor, and thus cannot be larger than the greatest common divisor. \diamondsuit

As an aside, I really liked the way it was proved in class. Great job!

The Euclidean algorithm can be turned backwards with back-substitution (some call this the extended Euclidean algorithm) to give a solution in x,y to the equation ax + by = \gcd(a,b). This has played a super important role in our class ever since. By the way, though I never said it in class, we proved Bezout’s Identity along the way (which we just called part of the Extended Euclidean Algorithm). This essentially says that the gcd of a and b is the smallest positive number expressible in the form ax + by. The Euclidean algorithm has shown us that the gcd is expressible in this form. How do we know it’s the smallest? Observe again that if c is a common divisor of a and b, then c divides the left hand side of ax + by = d, and so c \mid d. So d cannot be smaller than the gcd.
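For the record, here’s a compact sketch (mine) of the extended Euclidean algorithm; it carries the back-substitution along recursively instead of doing it afterwards:

```python
# Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g,
# where g = gcd(a, b).

def egcd(a, b):
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    # the recursive call gives b*x + (a % b)*y = g, and
    # a % b = a - (a // b)*b, so regroup the coefficients:
    return g, y, x - (a // b) * y

g, x, y = egcd(240, 46)
print(g, x, y, 240 * x + 46 * y)   # 2 -9 47 2
```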

This led us to explore and solve linear Diophantine equations of the form ax + by = c for general a,b,c. There will be solutions whenever \gcd(a,b) \mid c, and in such cases there are infinitely many solutions (do you remember how to see infinitely many other solutions?).

Linear Diophantine equations are very closely related to linear problems in modular arithmetic of the form ax \equiv c \mod m. In particular, this last modular equation is equivalent to ax + my = c for some y. (Can you show that these are the same?) Using what we’ve learned about linear Diophantine equations, we know that ax \equiv c \mod m has a solution iff \gcd(a,m) \mid c. But now, there are only finitely many incongruent (i.e. not the same \mod m) solutions. This is called the ‘Linear Congruence Theorem,’ and is interestingly the first major result we’ve learned with no proof on Wikipedia.

Theorem: the modular equation ax \equiv b \mod m has a solution iff \gcd(a,m) \mid b, in which case there are exactly \gcd(a,m) incongruent solutions.

Proof

We can translate a solution of ax \equiv b \mod m into a solution of ax + my = b, and vice-versa. So we know from the Extended Euclidean algorithm that there are only solutions if \gcd(a,m) \mid b. Now, let’s show that there are \gcd(a,m) solutions. I will do this a bit differently than how we did it in class.

First, let’s do the case when gcd(a,m)=1, and suppose we have a solution (x,y) so that ax + my = b. If there is another solution, then there is some perturbation we can do by shifting x by a number x' and y by a number y' that yields another solution looking like a(x + x') + m(y + y') = b. As we already know that ax + my = b, we can remove that from the equation. Then we get simply ax' = -my'. Since \gcd(m,a) = 1, we know (see below the proof) that m divides x'. But then the new solution x + x' \equiv x \mod m, so all solutions fall in the same congruence class – the same as x.

Now suppose that gcd(a,m) = d and that there is a solution. Since there is a solution, each of a,m, and b are divisible by d, and we can write them as a = da', b = db', m = dm'. Then the modular equation ax \equiv b \mod m is the same as da' x \equiv d b' \mod d m', which is the same as d m' \mid (d b' - d a'x). Note that in this last case, we can remove the d from both sides, so that m' \mid (b' - a'x), or that a'x \equiv b' \mod m'. From the first case, we know this has exactly one solution mod m', but we are interested in solutions mod m. Just as knowing that x \equiv 2 \mod 4 means that x might be 2, 6, 10 \mod 12 since 4 goes into 12 three times, m' goes into m d times, and this gives us our d incongruent solutions. \diamondsuit
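The proof is an algorithm in disguise. Here’s a sketch of it (mine, using the egcd routine from above) that returns all of the incongruent solutions:

```python
# Solve a*x = b (mod m): no solutions unless gcd(a, m) divides b, and
# exactly gcd(a, m) incongruent solutions when it does.

def solve_congruence(a, b, m):
    g, x, _ = egcd(a, m)             # a*x + m*y = g
    if b % g != 0:
        return []                    # no solutions at all
    x0 = (x * (b // g)) % m          # one particular solution
    return [(x0 + k * (m // g)) % m for k in range(g)]

print(solve_congruence(8, 5, 9))     # [4], as in the CRT post's example
print(solve_congruence(4, 2, 6))     # gcd(4, 6) = 2 divides 2: two solutions
```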

In that proof, I used a fact that we’ve now proven three times in class in different forms: if \gcd(a,b) = 1 and a \mid bc, then we can conclude that a \mid c. Can you prove this? Can you prove this without using unique factorization? We actually used this fact to prove unique factorization (really we use the statement about primes: if p is a prime and p \mid ab, then we must have that p \mid a or p \mid b, or perhaps both). Do you remember how we proved that? We used the well-ordering principle to say that if there were a positive integer that couldn’t be uniquely factored, then there is a smallest one. Choosing two of its factorizations and finding a prime on one side, we concluded that this prime divided the other side. Dividing both sides by this prime yielded a smaller (and therefore unique, by assumption) factorization. This was the gist of the argument.

The last major bit of the week was the Chinese Remainder Theorem, which is awesome enough (and which I have enough to say about) that it will get its own post – which I’m working on now.

I’ll see you all in class tomorrow.
