An intuitive overview of Taylor series Sunday, Nov 17 2013 

This is a note written for my fall 2013 Math 100 class, but it was not written “for the exam,” nor does anything on here subtly hint at anything on any exam. But I hope that this will be helpful for anyone who wants to get a basic understanding of Taylor series. What I want to do is try to get some sort of intuitive grasp on Taylor series as approximations of functions. By intuitive, I mean intuitive to those with a good grasp of functions, the basics of a first semester of calculus (derivatives, integrals, the mean value theorem, and the fundamental theorem of calculus) – so it’s a mathematical intuition. In this way, this post is a sort of follow-up of my earlier note, An Intuitive Introduction to Calculus.

PLEASE NOTE that this note was written and typeset for my (now) main site: You can find it here. Because of this, some of the math shown here will display better there, where I have more control. This will also serve as the announcement that the site is now on an improved webhost and should be fast and fully operational. Now, back to the math.

We care about Taylor series because they allow us to approximate other functions in predictable ways. Sometimes, these approximations can be made to be very, very, very accurate without requiring too much computing power. You might have heard that computers/calculators routinely use Taylor series to calculate things like {e^x} (which is more or less true). But up to this point in most students’ mathematical development, mathematics has been clean and perfect; everything has been exact algorithms yielding exact answers for years and years. This is simply not the way of the world.

Here’s a fundamental fact to both mathematics and life: almost anything worth doing is probably pretty hard and pretty messy.

For a very recognizable example, let’s think about finding zeroes of polynomials. Finding roots of linear polynomials is very easy. If we see {5 + x = 0}, we see that {-5} is the zero. Similarly, finding roots of quadratic polynomials is very easy, and many of us have memorized the quadratic formula to this end. Thus {ax^2 + bx + c = 0} has solutions {x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}}. These are both nice, algorithmic, and exact. But I will guess that the vast majority of those who read this have never seen a “cubic polynomial formula” for finding roots of cubic polynomials (although it does exist, it is horrendously messy – look up Cardano’s formula). There is even an algorithmic way of finding the roots of quartic polynomials. But here’s something amazing: there is no general method for finding the exact roots of 5th degree polynomials (or higher degree).

I don’t mean “we haven’t found it yet, but there may be one,” or even “you’ll have to use one of these myriad ways” – I mean it has been shown that there is no general method of finding exact roots of degree 5 or higher polynomials. But we certainly can approximate them arbitrarily well. So even something as simple as finding roots of polynomials, which we’ve been doing since we were in middle school, gets incredibly and unbelievably complicated.
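Since we can’t write the roots down exactly, we approximate. Here’s a minimal sketch (mine, not from any algorithm named in this post) of approximating the one real root of the degree-5 polynomial {x^5 - x - 1} by bisection:

```python
# A sketch: no formula in radicals exists for the roots of x^5 - x - 1,
# but bisection approximates its real root arbitrarily well.

def f(x):
    return x**5 - x - 1

# f(1) = -1 < 0 and f(2) = 29 > 0, so a root lies in [1, 2].
lo, hi = 1.0, 2.0
for _ in range(60):          # each step halves the interval
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
print(root)  # roughly 1.1673
```

Sixty halvings shrink the interval below machine precision, so this is an “exact” answer for all practical purposes, even though no closed form exists.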

So before we hop into Taylor series directly, I want to get into the mindset of approximating functions with other functions.

1. Approximating functions with other functions

We like working with polynomials because they’re so easy to calculate and manipulate. So sometimes we try to approximate complicated functions with polynomials, a problem sometimes called “polynomial interpolation”.

Suppose we wanted to approximate {\sin(x)}. The most naive approximation that we might do is see that {\sin(0) = 0}, so we might approximate {\sin(x)} by {p_0(x) = 0}. We know that it’s right at least once, and since {\sin(x)} is periodic, it’s going to be right many times. I write {p_0} to indicate that this is a degree {0} polynomial, that is, a constant polynomial. Clearly though, this is a terrible approximation, and we can do better.
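To see concretely how much room for improvement there is, here is a quick comparison (a preview of where this is headed: the polynomials {p_1(x) = x} and {p_3(x) = x - x^3/6} are the standard next approximations to {\sin(x)}, which the post has not yet derived):

```python
import math

# Compare the constant approximation p0(x) = 0 against the next natural
# polynomial approximations of sin(x) near 0.
p0 = lambda x: 0
p1 = lambda x: x
p3 = lambda x: x - x**3 / 6

x = 0.5
print(abs(math.sin(x) - p0(x)))  # about 0.479
print(abs(math.sin(x) - p1(x)))  # about 0.021
print(abs(math.sin(x) - p3(x)))  # about 0.00026
```

Each extra term buys a dramatically better approximation near {0}, which is exactly the phenomenon Taylor series make precise.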


Math 100: Week 4 Saturday, Sep 28 2013 

This is a post for my math 100 calculus class of fall 2013. In this post, I give the 4th week’s recitation worksheet (no solutions yet – I’m still writing them up). More pertinently, we will also go over the most recent quiz and common mistakes. Trig substitution, it turns out, is not so easy.

Before we hop into the details, I’d like to encourage you all to avail yourselves of each other, your professor, your TA, and the MRC in preparation for the first midterm (next week!).

1. The quiz

There were two versions of the quiz this week, but they were very similar. Both asked about a particular trig substitution

\displaystyle \int_3^6 \sqrt{36 - x^2} \mathrm{d} x

And the other was

\displaystyle \int_{2\sqrt 2}^4 \sqrt{16 - x^2} \mathrm{d}x.

They are very similar, so I’m only going to go over one of them. I’ll go over the first one. We know we are to use trig substitution. I see two ways to proceed: either draw a reference triangle (which I recommend), or think through the Pythagorean trig identities until you find the one that works here (which I don’t recommend).

We see a {\sqrt{36 - x^2}}, and this is hard to deal with. Let’s draw a right triangle that has {\sqrt{36 - x^2}} as a side. I’ve drawn one below. (Not fancy, but I need a better light).

In this picture, note that {\sin \theta = \frac{x}{6}}, or that {x = 6 \sin \theta}, and that {\sqrt{36 - x^2} = 6 \cos \theta}. If we substitute {x = 6 \sin \theta} in our integral, this means that we can replace our {\sqrt{36 - x^2}} with {6 \cos \theta}. But this is a substitution, so we need to think about {\mathrm{d} x} too. Here, {x = 6 \sin \theta} means that {\mathrm{d}x = 6 \cos \theta \, \mathrm{d}\theta}.

Some people used the wrong trig substitution, meaning they used {x = \tan \theta} or {x = \sec \theta}, and got stuck. It’s okay to get stuck, but if you notice that something isn’t working, it’s better to try something else than to stare at the paper for 10 minutes. Other people used {x = 6 \cos \theta}, which is perfectly doable and parallel to what I write below.

Another common error was people forgetting about the {\mathrm{d}x} term entirely. But it’s important!

Substituting these into our integral gives

\displaystyle \int_{?}^{??} 36 \cos^2 (\theta) \mathrm{d}\theta,

where I have included question marks for the limits because, as after most substitutions, they are different. You have a choice: you might go on and put everything back in terms of {x} before you give your numerical answer; or you might find the new limits now.

It’s not correct to continue writing down the old limits. The variable has changed, and we really don’t want {\theta} to go from {3} to {6}.

If you were to find the new limits, then you need to consider: if {x=3} and {\frac{x}{6} = \sin \theta}, then we want a {\theta} such that {\sin \theta = \frac{3}{6}= \frac{1}{2}}, so we might use {\theta = \pi/6}. Similarly, when {x = 6}, we want {\theta} such that {\sin \theta = 1}, like {\theta = \pi/2}. Note that these were two arcsine calculations, which we would have to do even if we waited until after we put everything back in terms of {x} to evaluate.

Some people left their answers in terms of these arcsines. As far as mistakes go, this isn’t a very serious one. But this is the sort of simplification that is expected of you on exams, quizzes, and homeworks. In particular, if something can be written in a much simpler way through the unit circle, then you should do it if you have the time.

So we could rewrite our integral as

\displaystyle \int_{\pi/6}^{\pi/2} 36 \cos^2 (\theta) \mathrm{d}\theta.

How do we integrate {\cos^2 \theta}? We need to make use of the identity {\cos^2 \theta = \dfrac{1 + \cos 2\theta}{2}}. You should know this identity for this midterm. Now we have

\displaystyle 36 \int_{\pi/6}^{\pi/2}\left(\frac{1}{2} + \frac{\cos 2 \theta}{2}\right) \mathrm{d}\theta = 18 \int_{\pi/6}^{\pi/2}\mathrm{d}\theta + 18 \int_{\pi/6}^{\pi/2}\cos 2\theta \mathrm{d}\theta.

The first integral is extremely simple and yields {6\pi}. The second integral has antiderivative {\dfrac{\sin 2 \theta}{2}} (Don’t forget the {2} on bottom!), and we have to evaluate {\big[9 \sin 2 \theta \big]_{\pi/6}^{\pi/2}}, which gives {-\dfrac{9 \sqrt 3}{2}}. You should know the unit circle sufficiently well to evaluate this for your midterm.

And so the final answer is {6 \pi - \dfrac{9 \sqrt 3}{2} \approx 11.0553}. (You don’t need to be able to do that approximation).
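As a sanity check (not part of the quiz solution), we can approximate the original integral numerically and compare it against the exact answer:

```python
import math

# Approximate the quiz integral, the integral of sqrt(36 - x^2) from 3 to 6,
# with a simple midpoint Riemann sum, and compare it to 6*pi - 9*sqrt(3)/2.

def f(x):
    return math.sqrt(36 - x * x)

n = 100_000
h = (6 - 3) / n
approx = sum(f(3 + (i + 0.5) * h) for i in range(n)) * h

exact = 6 * math.pi - 9 * math.sqrt(3) / 2
print(approx, exact)  # both about 11.0553
```

The two values agree to several decimal places, which is good evidence that no signs or constants were dropped along the way.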

Let’s go back a moment and suppose you didn’t re-evaluate the limits once you substituted in {\theta}. Then, following the same steps as above, you’d be left with

\displaystyle 18 \int_{?}^{??}\mathrm{d}\theta + 18 \int_{?}^{??}\cos 2\theta \mathrm{d}\theta = \left[ 18 \theta \right]_?^{??} + \left[ 9 \sin 2 \theta \right]_?^{??}.

Since {\frac{x}{6} = \sin \theta}, we know that {\theta = \arcsin (x/6)}. This is how we evaluate the left integral, and we are left with {[18 \arcsin(x/6)]_3^6}. This means we need to know the arcsine of {1} and {\frac 12}. These are exactly the same two arcsine computations that I referenced above! Following them again, we get {6\pi} as the answer.

We could do the same for the second part, since {\sin ( 2 \arcsin (x/6))} when {x = 3} is {\sin (2 \arcsin \frac{1}{2} ) = \sin (2 \cdot \frac{\pi}{6} ) = \frac{\sqrt 3}{2}}; and when {x = 6} we get {\sin (2 \arcsin 1) = \sin (2 \cdot \frac{\pi}{2}) = \sin (\pi) = 0}.

Putting these together, we see that the answer is again {6\pi - \frac{9\sqrt 3}{2}}.

Or, throwing yet another option out there, we could do something else (a little bit wittier, maybe?). We have this {\sin 2\theta} term to deal with. You might recall that {\sin 2 \theta = 2 \sin \theta \cos \theta}, the so-called double-angle identity.

Then {9 \sin 2\theta = 18 \sin \theta \cos \theta}. Going back to our reference triangle, we know that {\cos \theta = \dfrac{\sqrt{36 - x^2}}{6}} and that {\sin \theta = \dfrac{x}{6}}. Putting these together,

\displaystyle 9 \sin 2 \theta = \dfrac{ x\sqrt{36 - x^2} }{2}.

When {x=6}, this is {0}. When {x = 3}, we have {\dfrac{ 3\sqrt {27}}{2} = \dfrac{9\sqrt 3}{2}}.

And fortunately, we get the same answer again at the end of the day. (phew).

2. The worksheet

Finally, here is the worksheet for the day. I’m working on their solutions, and I’ll have that up by late this evening (sorry for the delay).

Ending tidbits – when I was last a TA, I tried to see what were the good predictors of final grade. Some things weren’t very surprising – there is a large correlation between exam scores and final grade. Some things were a bit surprising – low homework scores correlated well with low final grade, but high homework scores didn’t really have a strong correlation with final grade at all; attendance also correlated weakly. But one thing that really stuck with me was the first midterm grade vs final grade in class: it was really strong. For a bit more on that, I refer you to my final post from my Math 90 posts.

Math 100: Week 3 and pre-midterm Tuesday, Sep 24 2013 

This is a post for my Math 100 class of fall 2013. In this post, I give the first three weeks’ worksheets from recitation and the set of solutions to week three’s worksheet, as well as a few administrative details.

Firstly, here is the recitation work from the first three weeks:

  1. (there was no recitation the first week)
  2. A worksheet focusing on review.
  3. A worksheet focusing on integration by parts and u-substitution, with solutions.

In addition, I’d like to remind you that I have office hours from 2-4pm (right now) in Kassar 018. I’ve had multiple people set up appointments with me outside of these hours, which I’m tempted to interpret as suggesting that I change when my office hours are. If you have a preference, let me know, and I’ll try to incorporate it.

Finally, there will be an exam next Tuesday. I’ve been getting a lot of emails about what material will be on the exam. The answer is that everything you have learned up to now and by the end of this week is fair game for exam material. This also means there could be exam questions on material that we have not discussed in recitation. So be prepared. However, I will be setting aside a much larger portion of recitation this Thursday for questions than normal. So come prepared with your questions.

Best of luck, and I’ll see you in class on Thursday.

An intuitive introduction to calculus Saturday, Sep 7 2013 

This is a post written for my fall 2013 Math 100 class but largely intended for anyone with knowledge of what a function is and a desire to know what calculus is all about. Calculus is made out to be the pinnacle of the high school math curriculum, and correspondingly is thought to be very hard. But the difficulty is bloated, blown out of proportion. In fact, the ideas behind calculus are approachable and even intuitive if thought about in the right way.

Many people managed to stumble across the page before I’d finished all the graphics. I’m sorry, but they’re all done now! I was having trouble interpreting how WordPress was going to handle my gif files – it turns out that they automagically resize them if you don’t make them of the correct size, which makes them not display. It took me a bit to realize this. I’d like to mention that this actually started as a 90 minute talk I had with my wife over coffee, so perhaps an alternate title would be “Learning calculus in 2 hours over a cup of coffee.”

So read on if you would like to understand what calculus is, or if you’re looking for a refresher of the concepts from a first semester in calculus (like for Math 100 students at Brown), or if you’re looking for a bird’s eye view of AP Calc AB subject material.

1. An intuitive and semicomplete introduction to calculus

We will think of a function {f(\cdot)} as something that takes an input {x} and gives out another number, which we’ll denote by {f(x)}. We know functions like {f(x) = x^2 + 1}, which means that if I give in a number {x} then the function returns the number {f(x) = x^2 + 1}. So I put in {1}, I get {1^2 + 1 = 2}, i.e. {f(1) = 2}. Primary and secondary school overly conditions students to think of functions in terms of a formula or equation. The important thing to remember is that a function is really just something that gives an output when given an input, and if the same input is given later then the function spits the same output out. As an aside, I should mention that the most common problem I’ve seen in my teaching and tutoring is a fundamental misunderstanding of functions and their graphs.

For a function that takes in and spits out numbers, we can associate a graph. A graph is a two-dimensional representation of our function, where by convention the input is put on the horizontal axis and the output is put on the vertical axis. Each axis is numbered, and in this way we can identify any point in the graph by its coordinates, i.e. its horizontal and vertical position. A graph of a function {f(x)} includes a point {(x,y)} if {y = f(x)}.

The graph of the function x^2 + 1 is in blue. The emphasized point appears on the graph because it is of the form (x, f(x)). In particular, this point is (1, 2).

Thus each point on the graph is really of the form {(x, f(x))}. A large portion of algebra I and II is devoted to being able to draw graphs for a variety of functions. And if you think about it, graphs contain a huge amount of information. Graphing {f(x)= x^2 + 1} involves drawing an upwards-facing parabola, which really represents an infinite number of points. That’s pretty intense, but it’s not what I want to focus on here.

1.1. Generalizing slope – introducing the derivative

You might recall the idea of the ‘slope’ of a line. A line has a constant ratio of how much the {y} value changes for a specific change in {x}, which we call the slope (people always seem to remember rise over run). In particular, if a line passes through the points {(x_1, y_1)} and {(x_2, y_2)}, then its slope will be the vertical change {y_2 - y_1} divided by the horizontal change {x_2 - x_1}, or {\dfrac{y_2 - y_1}{x_2 - x_1}}.

The graph of a line appears in blue. The two points (0,1) and (1,3) are shown on the line. The horizontal red line shows the horizontal change. The vertical red line shows the vertical change. The ‘slope’ of the blue line is the length of the vertical red line divided by the length of the horizontal red line.

So if the line is given by an equation {f(x) = \text{something}}, then the slope from two inputs {x_1} and {x_2} is {\dfrac{f(x_2) - f(x_1)}{x_2 - x_1}}. As an aside, for those that remember things like the ‘standard equation’ {y = mx + b} or ‘point-slope’ {(y - y_0) = m(x - x_0)} but who have never thought or been taught where these come from: the claim that lines are the curves of constant slope is saying that for any choice of {(x_1, y_1)} on the line, we expect {\dfrac{y_2 - y_1}{x_2 - x_1} = m} a constant, which I denote by {m} for no particularly good reason other than the fact that some textbook author long ago did such a thing. Since we’re allowing ourselves to choose any {(x_1, y_1)}, we might drop the subscripts – since they usually mean a constant – and rearrange our equation to give {y_2 - y = m(x_2 - x)}, which is what has been so unkindly drilled into students’ heads as the ‘point-slope form.’ This is why lines have a point-slope form, and a reason that it comes up so much is that it comes so naturally from the defining characteristic of a line, i.e. constant slope.

But one cannot speak of the ‘slope’ of a parabola.

The parabola f(x) = x^2 + 1 is shown in blue. Slope is a measure of how much the function f(x) changes when x is changed. Some tangent lines to the parabola are shown in red. The slope of each line seems like it should be the ‘slope’ of the parabola when the line touches the parabola, but these slopes are different.

Intuitively, we look at our parabola {x^2 + 1} and see that the ‘slope,’ or an estimate of how much the function {f(x)} changes with a change in {x}, seems to be changing depending on what {x} values we choose. (This should make sense – if it didn’t change, and had constant slope, then it would be a line). The first major goal of calculus is to come up with an idea of a ‘slope’ for non-linear functions. I should add that we already know a sort of ‘instantaneous rate of change’ of a nonlinear function. When we’re in a car and we’re driving somewhere, we’re usually speeding up or slowing down, and our pace isn’t usually linear. Yet our speedometer still manages to say how fast we’re going, which is an immediate rate of change. So if we had a function {p(t)} that gave us our position at a time {t}, then the slope would give us our velocity (change in position per change in time) at a moment. So without knowing it, we’re familiar with a generalized slope already. Now in our parabola, we don’t expect a constant slope, so we want to associate a ‘slope’ to each input {x}. In other words, we want to be able to understand how rapidly the function {f(x)} is changing at each {x}, analogous to how the slope {m} of a line {g(x) = mx + b} tells us that if we change our input by an amount {h} then our output value will change by {mh}.

How does calculus do that? The idea is to get closer and closer approximations. Suppose we want to find the ‘slope’ of our parabola at the point {x = 1}. Let’s get an approximate answer. The slope of the line coming from inputs {x = 1} and {x = 2} is a (poor) approximation. In particular, since we’re working with {f(x) = x^2 + 1}, we have that {f(2) = 5} and {f(1) = 2}, so that the ‘approximate slope’ from {x = 1} and {x = 2} is {\frac{5 - 2}{2 - 1} = 3}. But looking at the graph,

The parabola x^2 + 1 is shown in blue, and the line going through the points (1,2) and (2,5) is shown. The line immediately goes above and crosses the parabola, so it seems like this line is rising faster (changing faster) than the parabola. It’s too steep, and the slope is too high to reflect the ‘slope’ of the parabola at the indicated point.

we see that it feels like this slope is too large. So let’s get closer. Suppose we use inputs {x = 1} and {x = 1.5}. We get that the approximate slope is {\frac{3.25 - 2}{1.5 - 1} = 2.5}. If we were to graph it, this would also feel too large. So we can keep choosing smaller and smaller changes, like using {x = 1} and {x = 1.1}, or {x = 1} and {x = 1.01}, and so on. This next graphic contains these approximations, with chosen points getting closer and closer to {1}.

The parabola x^2 + 1 is shown in blue. Two points are chosen on the parabola and the line between them is drawn in red. As the points get closer to each other, the red line indicates the rate of growth of the parabola at the point (1,2) better and better. So the slope of the red lines seems to be getting closer to the ‘slope’ of the parabola at (1,2).

Let’s look a little closer at the values we’re getting for our slopes when we use {1} and {2, 1.5, 1.1, 1.01, 1.001} as our inputs. We get

\displaystyle \begin{array}{c|c} \text{second input} & \text{approx. slope} \\ \hline 2 & 3 \\ 1.5 & 2.5 \\ 1.1 & 2.1 \\ 1.01 & 2.01 \\ 1.001 & 2.001 \end{array}
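The table above is easy to recompute directly; here is a quick sketch of the same difference quotients in code:

```python
# Difference quotients (f(x2) - f(1)) / (x2 - 1) for f(x) = x^2 + 1,
# with the second input x2 marching toward 1.

def f(x):
    return x**2 + 1

slopes = []
for x2 in [2, 1.5, 1.1, 1.01, 1.001]:
    slope = (f(x2) - f(1)) / (x2 - 1)
    slopes.append(slope)
    print(x2, slope)
```

Each slope is exactly {1 + x_2} (up to floating-point rounding), which is another way to see the approximations marching toward {2}.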

It looks like the approximate slopes are approaching {2}. What if we plot the graph with a line of slope {2} going through the point {(1,2)}?

The parabola x^2 + 1 is shown in blue. The line in red has slope 2 and goes through the point (1,2). We got this line by continuing the successive approximations done above. It looks like it accurately indicates the ‘slope’ of the parabola at (1,2).

It looks great! Let’s zoom in a whole lot.

When we zoom in, the blue parabola looks almost like a line, and the red line looks almost like the parabola! This is why we are measuring the ‘slope’ of the parabola in this fashion – when we zoom in, it looks more and more like a line, and we are getting the slope of that line.

That looks really close! In fact, what I’ve been allowing as the natural feeling slope, or local rate of change, is really the line tangent to the graph of our function at the point {(1, f(1))}. In a calculus class, you’ll spend a bit of time making sense of what it means for the approximate slopes to ‘approach’ {2}. This is called a ‘limit,’ and the details are not important to us right now. The important thing is that this let us get an idea of a ‘slope’ at a point on a parabola. It’s not really a slope, because a parabola isn’t a line. So we’ve given it a different name – we call this ‘the derivative.’ So the derivative of {f(x) = x^2 + 1} at {x = 1} is {2}, i.e. right around {x = 1} we expect a rate of change of {2}, so that we expect {f(1 + h) - f(1) \approx 2h}. If you think about it, we’re saying that we can approximate {f(x) = x^2 + 1} near the point {(1, 2)} by the line shown in the graph above: this line passes through {(1,2)} and its slope is {2}, what we’re calling the slope of {f(x) = x^2 + 1} at {x = 1}.

Let’s generalize. We were able to speak of the derivative at one point, but how about other points? The rest of this post is below the ‘more’ tag.


Twenty Mathematicians, Two Hard Problems, One Week, IdeaLab2013 Friday, Aug 2 2013 

July has been an exciting and busy month for me. I taught number theory 3 hours a day, 5 days a week, for 3 weeks to (mostly) devoted and motivated high school students in the Summer@Brown program. In the middle, I moved to Massachusetts. Immediately after the Summer@Brown program ended, I was given the opportunity to return to ICERM to participate in an experimental program called an IdeaLab.

IdeaLab invited 20 early career mathematicians to come together for a week and to generate ideas on two very different problems: Tipping Points in Climate Systems and Efficient Fully Homomorphic Encryption. Although I plan on writing a bit more about each of these problems and the IdeaLab process in action (at least from my point of view), I should say something about what these are.

Models of Earth’s climate are used all the time: to give daily weather reports, to predict and warn about hurricanes, to attempt to understand the effects of anthropogenic sources of carbon on long-term climate. As we know from uncertainty about weather reports, these models aren’t perfect. In particular, they don’t currently predict sudden, abrupt changes called ‘tipping points.’ But are tipping points possible? There have been warm periods following ice ages in the past, so it seems that there might be tipping points that aren’t modelled in the system. Understanding these possibilities forms the basis of the Tipping Points in Climate Systems project. This project also forms another link in Mathematics of Planet Earth.

On the other hand, homomorphic encryption is a topic in modern cryptography. To encrypt a message is to make it hard or impossible for others to read it unless they have a ‘key.’ You might think that you wouldn’t want someone holding onto encrypted data to be able to do anything with the data, and in most modern encryption algorithms this is the case. But what if we were able to give Google an encrypted dataset and ask them to perform a search on it? Is it possible to have a secure encryption that would allow Google to do some sort of search algorithm and give us the results, but without Google ever understanding the data itself? It may seem far-fetched, but this is exactly the idea behind the Efficient Fully Homomorphic Encryption group. Surprisingly enough, it is possible. But known methods are obnoxiously slow and infeasible. This is why the group was after ‘efficient’ encryption.

So 20 early career mathematicians from all sorts of areas of mathematics gathered to think about these two questions. For the rest of this post, I’d like to talk about the structure and my thoughts on the IdeaLab process. In later posts, I’ll talk about each of the two major topics and what sorts of ideas came out of the process.


Chinese Remainder Theorem (SummerNT) Tuesday, Jul 9 2013 

This post picks up from the previous post on Summer@Brown number theory from 2013.

Now that we’d established ideas about solving the modular equation ax \equiv c \mod m, solving the linear diophantine equation ax + by = c, and about general modular arithmetic, we began to explore systems of modular equations. That is, we began to look at equations like

Suppose x satisfies the following three modular equations (rather, the following system of linear congruences):

x \equiv 1 \mod 5

x \equiv 2 \mod 7

x \equiv 3 \mod 9

Can we find out what x is? This is a clear parallel to solving systems of linear equations, as is usually done in algebra I or II in secondary school. A common way to solve systems of linear equations is to solve for a variable and substitute it into the next equation. We can do something similar here.

From the first equation, we know that x = 1 + 5a for some a. Substituting this into the second equation, we get that 1 + 5a \equiv 2 \mod 7, or that 5a \equiv 1 \mod 7. So a will be the modular inverse of 5 \mod 7. A quick calculation (or a slightly less quick Euclidean algorithm in the general case) shows that the inverse is 3. Multiplying both sides by 3 yields a \equiv 3 \mod 7, or rather that a = 3 + 7b for some b. Back substituting, we see that this means that x = 1+5a = 1+5(3+7b), or that x = 16 + 35b.

Now we repeat this work, using the third equation. 16 + 35b \equiv 3 \mod 9, so that 8b \equiv 5 \mod 9. Another quick calculation (or Euclidean algorithm) shows that this means b \equiv 4 \mod 9, or rather b = 4 + 9c for some c. Putting this back into x yields the final answer:

x = 16 + 35(4 + 9c) = 156 + 315c

x \equiv 156 \mod 315

And if you go back and check, you can see that this works. \diamondsuit
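The back-substitution above can be carried out in a few lines of code (a sketch; modular inverses come from Python’s built-in three-argument pow, as in pow(a, -1, m)):

```python
# Solve x = 1 (mod 5), x = 2 (mod 7), x = 3 (mod 9) by substitution.

# x = 1 + 5a; then 1 + 5a = 2 (mod 7) gives a = 5^{-1} * 1 = 3 (mod 7)
a = (pow(5, -1, 7) * (2 - 1)) % 7           # a = 3
x1 = 1 + 5 * a                              # x = 16 (mod 35)

# 16 + 35b = 3 (mod 9) gives 8b = 5 (mod 9), so b = 4 (mod 9)
b = (pow(35 % 9, -1, 9) * ((3 - x1) % 9)) % 9
x = x1 + 35 * b
print(x)  # 156
```

Each step mirrors the hand computation exactly: solve one congruence, substitute, and repeat.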

There is another, very slick, method as well. This was a clever solution mentioned in class. The idea is to construct a solution directly. The way we’re going to do this is to set up a sum, where each part only contributes to one of the three modular equations. In particular, note that if we take something like 7 \cdot 9 \cdot [7\cdot9]_5^{-1}, where this inverse means the modular inverse with respect to 5, then this vanishes mod 7 and mod 9, but gives 1 \mod 5. Similarly 2\cdot 5 \cdot 9 \cdot [5\cdot9]_7^{-1} vanishes mod 5 and mod 9 but leaves the right remainder mod 7, and 3 \cdot 5 \cdot 7 \cdot [5\cdot 7]_9^{-1} vanishes mod 5 and mod 7, but leaves the right remainder mod 9.

Summing them together yields a solution (Do you see why?). The really nice thing about this algorithm to get the solution is that it parallelizes really well, meaning that you can give different computers separate problems, and then combine the things together to get the final answer. This is going to come up again later in this post.
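The direct construction, written out as code (again a sketch, with inverses via pow(a, -1, m)):

```python
# Direct CRT construction for x = 1 (mod 5), x = 2 (mod 7), x = 3 (mod 9).
residues = [1, 2, 3]
moduli = [5, 7, 9]
M = 5 * 7 * 9  # 315

x = 0
for r, m in zip(residues, moduli):
    others = M // m                       # product of the other two moduli
    x += r * others * pow(others, -1, m)  # contributes r mod m, 0 elsewhere
x %= M
print(x)  # 156
```

Each summand vanishes modulo the other two moduli, so the three terms never interfere with one another: that is why the three summands could be computed on three separate machines.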

These are two solutions that follow along the idea of the Chinese Remainder Theorem (CRT), which in general says that as long as the moduli are relatively prime, then the system

a_1 x \equiv b_1 \mod m_1

a_2 x \equiv b_2 \mod m_2

\vdots

a_k x \equiv b_k \mod m_k

will always have a unique solution \mod m_1m_2 \ldots m_k. Note, this is two statements: there is a solution (statement 1), and the solution is unique up to modding by the product of these moduli (statement 2). Proof Sketch: Either of the two methods described above to solve that problem can lead to a proof here. But there is one big step that makes such a proof much easier. Once you’ve shown that the CRT is true for a system of two congruences (effectively meaning you can replace them by one congruence), this means that you can use induction. You can reduce the (n+1)st case to the nth case using your newfound knowledge of how to combine two equations into one. Then the inductive hypothesis carries out the proof.

Note also that it’s pretty easy to go backwards. If I know that x \equiv 12 \mod 30, then I know that x will also be the solution to the system

x \equiv 2 \mod 5

x \equiv 0 \mod 6

In fact, a higher view of the CRT reveals that its great strength is that considering a number mod a set of relatively prime moduli is essentially the same as (isomorphic to) considering a number mod the product of the moduli.

The remainder of this post will be about why the CRT is cool and useful.

Application 1: Multiplying Large Numbers

Firstly, the easier application. Suppose you have two really large integers a,b (by really large, I mean with tens or hundreds of digits at least – for concreteness, say they each have n digits). When a computer computes their product ab, it has to perform n^2 digit multiplications, which can be a whole lot if n is big. But a computer can calculate mods of numbers in something like \log n time, which is much much much faster. So one way to quickly compute the product of two really large numbers is to use the Chinese Remainder Theorem to represent each of a and b with a set of much smaller congruences. For example (though we’ll be using small numbers), say we want to multiply 12 by 21. We might represent 12 by 12 \equiv 2 \mod 5, 5 \mod 7, 1 \mod 11 and represent 21 by 21 \equiv 1 \mod 5, 0 \mod 7, 10 \mod 11. To find their product, calculate the product in each of the moduli: 2 \cdot 1 \equiv 2 \mod 5, 5 \cdot 0 \equiv 0 \mod 7, 1 \cdot 10 \equiv 10 \mod 11. We know we can get a solution to the resulting system of congruences using the above algorithm, and the smallest positive solution will be the actual product.

This might not feel faster, but for much larger numbers, it really is. As an aside, here’s one way to make it play nice for parallel processing (which vastly makes things faster). After you’ve computed the congruences of 12 and 21 for the different moduli, send the numbers mod 5 to one computer, the numbers mod 7 to another, and the numbers mod 11 to a third (but also send each computer the list of moduli: 5,7,11). Each computer will calculate the product in their modulus and then use the Euclidean algorithm to calculate the inverse of the product of the other two moduli, and multiply these together.  Afterwards, the computers resend their data to a central computer, which just adds the result and takes it mod 5 \cdot 7 \cdot 11 (to get the smallest positive solution). Since mods are fast and all the multiplication is with smaller integers (no bigger than the largest mod, ever), it all goes faster. And since it’s parallelized, you’re replacing a hard task with a bunch of smaller easier tasks that can all be worked on at the same time. Very powerful stuff!
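Here is the 12-times-21 example from above in code (a sketch, reusing the direct CRT construction with inverses via pow(a, -1, m)):

```python
# Multiply 12 by 21 via its residues mod 5, 7, 11.
moduli = [5, 7, 11]
M = 5 * 7 * 11  # 385

# residues of the product, computed modulus by modulus
# (these three lines are the part that parallelizes)
prods = [(12 % m) * (21 % m) % m for m in moduli]  # [2, 0, 10]

# recombine with the CRT construction
x = 0
for r, m in zip(prods, moduli):
    others = M // m
    x += r * others * pow(others, -1, m)
x %= M
print(x)  # 252, which is indeed 12 * 21
```

The answer is recovered correctly because the true product, 252, is smaller than the product of the moduli, 385; choosing enough moduli so that their product exceeds any possible answer is the one constraint on this trick.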

I have actually never seen someone give the optimal running time that would come from this sort of procedure, though I don’t know why. Perhaps I’ll look into that one day.

Application 2: Secret Sharing in Networks of People

This is really slick. Let’s lay out the situation: I have a secret. I want you, my students, to have access to the secret, but only if at least six of you decide together that you want access. So I give each of you a message, consisting of a number and a modulus. Using the CRT, I can create a scheme where if any six of you decide you want to open the message, then you can pool your six pieces together to get the message. Notice that I mean any six of you, not a designated set of six. Further, no five people can recover the message without a sixth in a reasonable amount of time. That’s pretty slick, right?

The basic idea is for me to encode my message as a number P (I use P to mean plain-text). Then I choose a set of moduli, one for each of you, but I choose them in such a way that the product of any 5 of them is smaller than P, but the product of any 6 of them is greater than P (what this means is that I choose a lot of primes or near-primes of about the same size, all between the sixth and fifth roots of P). To each of you, I give the value of P \mod m_i and the modulus m_i, where m_i is your modulus. Since P is much bigger than m_i, it would take you a very long time to just happen across the correct multiple that reveals the message (if you ever managed). Now, once six of you get together and put your pieces together, the CRT guarantees a solution. Since the product of your six moduli will be larger than P, the smallest positive solution will be P. But if only five of you get together, then since the product of your moduli is less than P, you recover only P modulo that product, not P itself. In this way, we have our secret sharing network.

To get an idea of the security of this protocol, you might imagine if I gave each of you moduli around the size of a quadrillion. Then missing any single person means there are hundreds of trillions of reasonable multiples of your partial plain-text to check before getting to the correct multiple.
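Here is a toy sketch of the scheme in Python, scaled down to a 3-of-5 threshold so the numbers stay readable. The secret and moduli below are made-up illustrative values, chosen so that any two moduli multiply to less than P while any three multiply to more:

```python
def crt(residues, moduli):
    # iterative CRT reconstruction for pairwise coprime moduli;
    # returns the smallest nonnegative solution
    x, M = 0, 1
    for r, m in zip(residues, moduli):
        t = ((r - x) * pow(M, -1, m)) % m
        x, M = x + M * t, M * m
    return x

P = 123456                             # the secret (illustrative value)
moduli = [97, 101, 103, 107, 109]      # any two multiply below P, any three above
shares = [(P % m, m) for m in moduli]  # person i receives (P mod m_i, m_i)

# any three shares recover P: the product of their moduli exceeds P,
# so the smallest solution of the combined system is P itself
print(crt([r for r, _ in shares[:3]], [m for _, m in shares[:3]]))  # 123456

# two shares only pin down P modulo 97 * 101 = 9797, which is less than P
print(crt([r for r, _ in shares[:2]], [m for _, m in shares[:2]]))
```

With real-sized moduli (a quadrillion each, as above), the gap between "P mod a too-small product" and P itself is what an attacker would have to search through.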

A similar idea, but which doesn’t really use the CRT, is to consider the following problem: suppose two millionaires Alice and Bob (two people of cryptological fame) want to see which of them is richer, but without revealing how much wealth they actually have. This might sound impossible, but indeed it is not! There is a way for them to establish which one is richer but with neither knowing how much money the other has. Similar problems exist for larger parties (more than just 2 people), but none is more famous than the original: Yao’s Millionaire Problem.

Alright – I’ll see you all in class.

Notes on the first week (SummerNT) Monday, Jul 1 2013 

We’ve covered a lot of ground this first week! I wanted to provide a written summary, with partial proof, of what we have done so far.

We began by learning about proofs. We talked about direct proofs, inductive proofs, proofs by contradiction, and proofs by using the contrapositive of the statement we want to prove. A proof is a justification and argument based upon certain logical premises (which we call axioms); in contrast to other disciplines, a mathematical proof is completely logical and can be correct or incorrect.

We then established a set of axioms for the integers that would serve as the foundation of our exploration into the (often fantastic yet sometimes frustrating) realm of number theory. In short, the integers are a non-empty set with addition and multiplication [which are both associative, commutative, and have an identity, and which behave as we think they should behave; further, there are additive inverses], a total order [an integer is either bigger than, less than, or equal to any other integer, and it behaves like we think it should under addition and multiplication], and satisfying the deceptively important well ordering principle [every nonempty set of positive integers has a least element].

With this logical framework in place, we really began number theory in earnest. We talked about divisibility [we say that a divides b, written a \mid b, if b = ak for some integer k]. We showed that every number has a prime factorization. To do this, we used the well-ordering principle.

Suppose that not all integers have a prime factorization. Then by the well ordering principle there must be a smallest integer that does not have a prime factorization: call it n. Then we know that n is either prime or composite. If it’s prime, then it has a prime factorization (itself). If it’s composite, then it factors as n = ab with a,b < n. But then we know that each of a and b has a prime factorization, since they are less than n. Multiplying them together, we see that n has a prime factorization after all. \diamondsuit
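The well-ordering argument is effectively an algorithm. A rough sketch in Python, with trial division standing in for "find any proper factorization":

```python
def prime_factors(n):
    """Factor n > 1 into primes, mirroring the well-ordering argument:
    find any proper factorization n = a * b and recurse on the smaller
    pieces; if no proper factor exists, n is itself prime."""
    for a in range(2, int(n ** 0.5) + 1):
        if n % a == 0:
            return prime_factors(a) + prime_factors(n // a)
    return [n]  # no proper factor found, so n is prime

print(prime_factors(360))  # [2, 2, 2, 3, 3, 5]
print(prime_factors(97))   # [97], a prime
```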

Our first major result is the following:

There are infinitely many primes

There are many proofs, and we saw 2 of them in class. For posterity, I’ll present three here.

First proof that there are infinitely many primes

Take a finite collection of primes, say p_1, p_2, \ldots, p_k. We will show that there is at least one more prime not mentioned in the collection. To see this, consider the number p_1 p_2 \ldots p_k + 1. We know that this number will factor into primes, but upon division by every prime in our collection, it leaves a remainder of 1. Thus it has at least one prime factor different than every factor in our collection. \diamondsuit

This was a common proof used in class. A pattern also quickly emerges: 2 + 1 = 3, a prime. 2 \cdot 3 + 1 = 7, a prime. 2 \cdot 3 \cdot 5 + 1 = 31, also a prime. Is it always the case that a product of primes plus one is another prime? No, in fact. If you look at 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1 = 30031 = 59 \cdot 509, you get a nonprime.
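A quick computation confirms exactly where the pattern breaks; this is nothing more than trial division:

```python
def is_prime(n):
    # simple trial division, fine for numbers this small
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

product, results = 1, []
for p in [2, 3, 5, 7, 11, 13]:
    product *= p
    results.append((product + 1, is_prime(product + 1)))
print(results)
# 3, 7, 31, 211, 2311 are prime; 30031 = 59 * 509 is not
```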

Second proof that there are infinitely many primes

In a similar vein to the first proof, we will show that there is always a prime larger than n for any positive integer n. To see this, consider n! + 1. Upon dividing by any prime less than n, we get a remainder of 1. So all of its prime factors are larger than n, and so there are infinitely many primes. \diamondsuit

I would also like to present one more, which I’ve always liked.

Third proof that there are infinitely many primes

Suppose there are only finitely many primes p_1, \ldots, p_k. Then consider the two numbers n = p_1 \cdot \dots \cdot p_k and n - 1. We know that n - 1 has a prime factor p, and p must also divide n, since n is the product of all the primes. But then p divides n - (n - 1) = 1, which is nonsense; no prime divides 1. Thus there are infinitely many primes. \diamondsuit

We also looked at modular arithmetic, often called the arithmetic of a clock. When we say that a \equiv b \mod m, we mean to say that m \mid (b - a), or equivalently that a = b + km for some integer k (can you show these are equivalent?). And we pronounce that statement as “a is congruent to b mod m.” We played a lot with modular arithmetic: we added, subtracted, and multiplied many times, hopefully enough to build a bit of familiarity with the feel. In most ways, it feels like regular arithmetic. But in some ways, it’s different. Looking at the integers \mod m partitions the integers into a set of equivalence classes, i.e. into sets of integers that are congruent to 0 \mod m, 1 \mod m, \ldots. When we talk about adding or multiplying numbers \mod m, we’re really talking about manipulating these equivalence classes. (This isn’t super important to us – just a hint at what’s going on beneath the surface.)

We expect that if a \equiv b \mod m, then we would also have ac \equiv bc \mod m for any integer c, and this is true (can you prove this?). But we would also expect that if we had ac \equiv bc \mod m, then we would necessarily have a \equiv b \mod m, i.e. that we can cancel out the same number on each side. And it turns out that’s not the case. For example, 4 \cdot 2 \equiv 4 \cdot 5 \mod 6 (both are 2 \mod 6), but ‘cancelling the fours’ says that 2 \equiv 5 \mod 6 – that’s simply not true. With this example in mind, we went about proving things about modular arithmetic. It’s important to know what one can and can’t do.

One very big and important observation that we noted is that it doesn’t matter what order we operate, as in it doesn’t matter if we multiply an expression out and then ‘mod it’ down, or ‘mod it down’ and then multiply, or if we intermix these operations. Knowing this allows us to simplify expressions like 11^4 \mod 12, since we know 11 \equiv -1 \mod 12, and we know that (-1)^2 \equiv 1 \mod 12, and so 11^4 \equiv (-1)^{2\cdot 2} \equiv 1 \mod 12. If we’d wanted to, we could have multiplied it out and then reduced – the choice is ours!
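In code, this observation is exactly the trick behind fast modular exponentiation; Python’s built-in `pow` reduces mod m as it multiplies:

```python
# reduce first: 11 ≡ -1 (mod 12), so 11^4 ≡ (-1)^4 = 1
print(pow(11, 4, 12))   # 1: pow reduces mod 12 at each step
print((11 ** 4) % 12)   # 1: multiply out to 14641, then reduce
```

Both answers agree; the first never handles a number bigger than 12^2.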

Amidst our exploration of modular arithmetic, we noticed some patterns. Some numbers  are invertible in the modular sense, while others are not. For example, 5 \cdot 5 \equiv 1 \mod 6, so in that sense, we might think of \frac{1}{5} \equiv 5 \mod 6. More interestingly but in the same vein, \frac{1}{2} \equiv 6 \mod 11 since 2 \cdot 6 \equiv 1 \mod 11. Stated more formally, a number a has a modular inverse a^{-1} \mod m if there is a solution to the modular equation ax \equiv 1 \mod m, in which case that solution is the modular inverse. When does this happen? Are these units special?
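For small moduli, one can simply search for an inverse. A brute-force sketch, which also shows that some numbers have no inverse at all:

```python
def inverse_mod(a, m):
    """Brute-force search for a modular inverse of a mod m; returns None
    when a is not invertible (which happens exactly when gcd(a, m) > 1)."""
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    return None

print(inverse_mod(5, 6))   # 5, since 5 * 5 = 25 ≡ 1 (mod 6)
print(inverse_mod(2, 11))  # 6, since 2 * 6 = 12 ≡ 1 (mod 11)
print(inverse_mod(4, 6))   # None: gcd(4, 6) = 2, so no inverse exists
```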

Returning to division, we think of the greatest common divisor. I showed you the Euclidean algorithm, and you managed to prove it in class. The Euclidean algorithm produces the greatest common divisor of a and b, and it looks like this (where I assume that b > a):

b = q_1 a + r_1

a = q_2 r_1 + r_2

r_1 = q_3 r_2 + r_3

\vdots

r_k = q_{k+2}r_{k+1} + r_{k+2}

r_{k+1} = q_{k+3}r_{k+2} + 0

where in each step, we just did regular old division to guarantee a remainder r_i that was less than the divisor. As the divisors become the remainders, this yields a strictly decreasing remainder at each iteration, so it will terminate (in fact, it’s very fast). Further, using the notation from above, I claimed that the gcd of a and b was the last nonzero remainder, in this case r_{k+2}. How did we prove it?

Proof of Euclidean Algorithm

Suppose that d is a common divisor (such as the greatest common divisor) of a and b. Then d divides the left hand side of b - q_1 a = r_1, and thus must also divide the right hand side. So any divisor of a and b is also a divisor of r_1. This carries down the list, so that the gcd of a and b will divide each remainder term. How do we know that the last remainder is exactly the gcd, and no more? The way we proved it in class relied on the observation that r_{k+2} \mid r_{k+1}. But then r_{k+2} divides the right hand side of r_k = q_{k+2} r_{k+1} + r_{k+2}, and so it also divides the left. This also carries up the chain, so that r_{k+2} divides both a and b. So it is itself a divisor, and thus cannot be larger than the greatest common divisor. \diamondsuit

As an aside, I really liked the way it was proved in class. Great job!

The Euclidean algorithm can be turned backwards with back-substitution (some call this the extended Euclidean algorithm) to give a solution in x,y to the equation ax + by = \gcd(a,b). This has played a super important role in our class ever since. By the way, though I never said it in class, we proved Bezout’s Identity along the way (which we just called part of the extended Euclidean algorithm). This essentially says that the gcd of a and b is the smallest positive number expressible in the form ax + by. The Euclidean algorithm has shown us that the gcd is expressible in this form. How do we know it’s the smallest? Observe again that if c is a common divisor of a and b, then c divides the left hand side of ax + by = d, and so c \mid d. So no positive number d of this form can be smaller than the gcd.
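The back-substitution is easy to express recursively. A sketch of the extended Euclidean algorithm (this recursive formulation is a standard variant, not necessarily the exact back-substitution we did in class):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    # we have b*x + (a % b)*y = g, and a % b = a - (a // b) * b;
    # substituting gives a*y + b*(x - (a // b)*y) = g
    return (g, y, x - (a // b) * y)

g, x, y = extended_gcd(240, 46)
print(g)  # 2, along with a Bezout pair (x, y)
assert 240 * x + 46 * y == g
```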

This led us to explore and solve linear Diophantine equations of the form ax + by = c for general a,b,c. There are solutions precisely when \gcd(a,b) \mid c, and in such cases there are infinitely many solutions (do you remember how to find infinitely many other solutions from one?).

Linear Diophantine equations are very closely related to linear problems in modular arithmetic of the form ax \equiv c \mod m. In particular, this last modular equation is equivalent to ax + my = c for some y. (Can you show that these are the same?) Using what we’ve learned about linear Diophantine equations, we know that ax \equiv c \mod m has a solution iff \gcd(a,m) \mid c. But now there are only finitely many incongruent (i.e. distinct \mod m) solutions. This is called the ‘Linear Congruence Theorem,’ and is interestingly the first major result we’ve learned with no proof on Wikipedia.

Theorem: the modular equation ax \equiv b \mod m has a solution iff \gcd(a,m) \mid b, in which case there are exactly \gcd(a,m) incongruent solutions.


We can translate a solution of ax \equiv b \mod m into a solution of ax + my = b, and vice-versa. So we know from the Extended Euclidean algorithm that there are only solutions if \gcd(a,m) \mid b. Now, let’s show that there are \gcd(a,m) solutions. I will do this a bit differently than how we did it in class.

First, let’s do the case when gcd(a,m)=1, and suppose we have a solution (x,y) so that ax + my = b. If there is another solution, then there is some perturbation we can do by shifting x by a number x' and y by a number y' that yields another solution looking like a(x + x') + m(y + y') = b. As we already know that ax + my = b, we can remove that from the equation. Then we get simply ax' = -my'. Since \gcd(m,a) = 1, we know (see below the proof) that m divides x'. But then the new solution x + x' \equiv x \mod m, so all solutions fall in the same congruence class – the same as x.

Now suppose that \gcd(a,m) = d and that there is a solution. Since there is a solution, each of a, m, and b is divisible by d, and we can write them as a = da', b = db', m = dm'. Then the modular equation ax \equiv b \mod m is the same as da'x \equiv db' \mod dm', which is the same as dm' \mid (db' - da'x). Note that in this last case, we can remove the d from both sides, so that m' \mid b' - a'x, or that a'x \equiv b' \mod m'. From the first case, we know this has exactly one solution mod m', but we are interested in solutions mod m. Just as knowing that x \equiv 2 \mod 4 means that x might be 2, 6, 10 \mod 12 since 4 goes into 12 three times, m' goes into m d times, and this gives us our d incongruent solutions. \diamondsuit
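The proof translates directly into a short solver. A sketch returning all incongruent solutions mod m (`pow(a, -1, m)` computes the modular inverse, Python 3.8+):

```python
from math import gcd

def solve_congruence(a, b, m):
    """All incongruent solutions x of a*x ≡ b (mod m)."""
    d = gcd(a, m)
    if b % d != 0:
        return []                           # no solutions at all
    a2, b2, m2 = a // d, b // d, m // d
    x0 = (b2 * pow(a2, -1, m2)) % m2        # the unique solution mod m2
    return [x0 + t * m2 for t in range(d)]  # lift to d solutions mod m

print(solve_congruence(4, 2, 6))  # [2, 5]: 4*2 = 8 ≡ 2 and 4*5 = 20 ≡ 2 (mod 6)
print(solve_congruence(4, 3, 6))  # []: gcd(4, 6) = 2 does not divide 3
```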

I mentioned that we used a fact we’ve now proven three times in class in different forms: if \gcd(a,b) = 1 and a \mid bc, then we can conclude that a \mid c. Can you prove this? Can you prove it without using unique factorization? We actually used this fact to prove unique factorization (really we use the statement about primes: if p is a prime and p \mid ab, then we must have that p \mid a or p \mid b, or perhaps both). Do you remember how we proved that? We used the well-ordering principle to say that if there were a positive integer that couldn’t be uniquely factored, then there would be a smallest one. Choosing two of its factorizations and picking a prime from one side, we concluded that this prime divided the other side. Dividing both sides by this prime yielded a smaller (and therefore unique, by assumption) factorization. This was the gist of the argument.

The last major bit of the week was the Chinese Remainder Theorem, which is awesome enough (and which I have enough to say about) that it will get its own post – which I’m working on now.

I’ll see you all in class tomorrow.

A proof from the first sheet (SummerNT) Monday, Jun 24 2013 

In class today, we were asked to explain what was wrong with the following proof:

Claim: As x increases, the function

\displaystyle f(x)=\frac{100x^2+x^2\sin(1/x)+50000}{100x^2}

approaches (gets arbitrarily close to) 1.

Proof: Look at values of f(x) as x gets larger and larger.

f(5) \approx 21.002
f(10)\approx 6.0010
f(25)\approx 1.8004
f(50)\approx 1.2002
f(100) \approx 1.0501
f(500) \approx 1.0020

These values are clearly getting closer to 1. QED

Of course, this is incorrect. Choosing a couple of numbers and thinking there might be a pattern does not constitute a proof.

But on a related note, these sorts of questions (where you observe a pattern and seek to prove it) can sometimes lead to strongly suspected conjectures, which may or may not be true. Here’s an interesting one (with a good picture over at SpikedMath):

Draw 2 points on the circumference of a circle, and connect them with a line. How many regions is the circle divided into? (two). Draw another point, and connect it to the previous points with a line. How many regions are there now? Draw another point, connecting to the previous points with lines. How many regions now? Do this once more. Do you see the pattern? You might even begin to formulate a belief as to why it’s true.

But then draw one more point and its lines, and carefully count the number of regions formed in the circle. How many regions now? (It doesn’t fit the obvious pattern.)
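For points in general position, the count turns out to be governed by the known formula \binom{n}{4} + \binom{n}{2} + 1, which makes the broken pattern easy to compute:

```python
from math import comb  # binomial coefficients, Python 3.8+

def circle_regions(n):
    # regions cut out of a circle by all chords between n points
    # in general position (no three chords through one point)
    return comb(n, 4) + comb(n, 2) + 1

print([circle_regions(n) for n in range(1, 7)])  # [1, 2, 4, 8, 16, 31]
```

The powers of two hold for n up to 5, then the sixth point gives 31, not 32.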

So we know that the presented proof is incorrect. But let’s say we want to know if the statement is true. How can we prove it? Further, we want to prove it without calculus – we are interested in an elementary proof. How should we proceed?

Firstly, we should say something about radians. Recall that at an angle \theta (in radians) on the unit circle, the arc-length subtended by the angle \theta is exactly \theta (in fact, this is the defining attribute of radians). And the value \sin \theta is exactly the height, or rather the y value, of the part of the unit circle at angle \theta. It’s annoying to phrase, so we look for clarification at the hastily drawn math below:


The arc length subtended by theta has length theta. The value of sin theta is the length of the vertical line in black.

Note in particular that the arc length is longer than the value of \sin \theta, so that \sin \theta < \theta. (This relies critically on the fact that the angle is positive.) Further, we see that this is always true for small, positive \theta. So it will be true that for large, positive x, we’ll have \sin \frac{1}{x} < \frac{1}{x}. For those of you who know a bit more calculus, you might know that in fact, \sin(\frac{1}{x}) = \frac{1}{x} - \frac{1}{3!x^3} + O(\frac{1}{x^5}), which is a more precise statement.

What do we do with this? Well, I say that this allows us to finish the proof.

\dfrac{100x^2 + x^2 \sin(1/x) + 50000}{100x^2} \leq \dfrac{100x^2 + x + 50000}{100x^2} = 1 + \dfrac{1}{100x} + \dfrac{50000}{100x^2},

and the last two terms clearly go to zero as x increases. On the other side, \sin(1/x) > 0 once x > 1/\pi, so f(x) > 1 for large x. Squeezed between 1 and a quantity approaching 1, f(x) approaches 1. \spadesuit

Finally, I’d like to remind you about the class webpage at the left – I’ll see you tomorrow in class.

Recent developments in Twin Primes, Goldbach, and Open Access Tuesday, May 21 2013 

It has been a busy two weeks all over the math community. Well, at least it seemed so to me. Some of my friends have defended their theses and need only to walk to receive their PhDs; I completed my topics examination, Brown’s take on an oral examination; and I’ve given a trio of math talks.

Meanwhile, there have been developments in a relative of the Twin Primes conjecture, the Goldbach conjecture, and Open Access math journals.

1. Twin Primes Conjecture

The Twin Primes Conjecture states that there are infinitely many primes p such that p+2 is also a prime, and falls within the more general Polignac’s Conjecture, which says that for any even n, there are infinitely many primes p such that p+n is also prime. This is another one of those problems that is easy to state but seems tremendously hard to solve. But recently, Dr. Yitang Zhang of the University of New Hampshire has submitted a paper to the Annals of Mathematics (one of the most respected and prestigious journals in the field). The paper is reputedly extremely clear (in contrast to other recent monumental papers in number theory, i.e. the phenomenally technical papers of Mochizuki on the ABC conjecture), and the word on the street is that it went through the entire review process in less than one month. At this time, there is no publicly available preprint, so I have not had a chance to look at the paper. But word is spreading that credible experts have already carefully reviewed the paper and found no serious flaws.

Dr. Zhang’s paper proves that there are infinitely many primes that have a corresponding prime at most 70000000 or so away. Thus in particular there is at least one number k such that there are infinitely many primes p for which both p and p+k are prime. I did not think that this was within the reach of current techniques. But it seems that Dr. Zhang built on top of the work of Goldston, Pintz, and Yildirim to get his result. Further, it seems likely that the bound will be optimized and brought way down from 70000000. However, as indicated by Mark Lewko on MathOverflow, this proof will probably not extend naturally to a proof of the Twin Primes conjecture itself. Optimally, it might prove that p and p+16 are simultaneously prime infinitely often (which would still be amazing).

One should look out for his paper in an upcoming issue of the Annals.

2. Goldbach Conjecture

I feel strangely tied to the Goldbach Conjecture, as I get far more traffic, emails, and spam concerning my previous post on an erroneous proof of Goldbach than on any other topic I’ve written about. About a year ago, I wrote briefly about progress that Dr. Harald Helfgott had made towards the 3-Goldbach Conjecture. This conjecture states that every odd integer greater than five can be written as the sum of three primes. (This is another easy to state problem that is not at all easy to approach).

One week ago, Helfgott posted a preprint to the arxiv that claims to complete his previous work and prove 3-Goldbach. Further, he uses the circle method and good old L-functions, so I feel like I should read over it more closely to learn a few things as it’s very close to my field. (Further still, he’s a Brandeis alum, and now that my wife will be a grad student at Brandeis I suppose I should include it in my umbrella of self-association). While I cannot say that I read the paper, understood it, and affirm its correctness, I can say that the method seems right for the task (related to the 10th and most subtle of Scott Aaronson’s list that I love to quote).

An interesting side bit to Helfgott’s proof is that it only works for numbers larger than 10^{30} or so. Fortunately, he and David Platt have also given a computer verification for numbers less than that on the arxiv. 10^{30} is really, really, really big, so even that is a very slick bit of work.

3. FoM has opened

I care about open access. Fortunately, so do many of the big names. Two of the big attempts to create a good, strong set of open access math journals have just released their first articles. The Forum of Mathematics Sigma and Pi journals have each released a paper on algebraic and complex geometry. And they’re completely open! I don’t know what it takes for a journal to get off the ground, but I know that it starts with people reading its articles. So read up!

The two articles are


and, in Pi


Calculations with a Gauss-type Sum Wednesday, Apr 24 2013 

It’s been a while since I’ve posted – I’m sorry. I’ve been busy, perhaps working on a paper (let’s hope it becomes a paper) and otherwise trying to learn things. This post is very closely related to some computations that have been coming up in what I’m currently looking at (in particular, looking at h-th coefficients of Eisenstein series of half-integral weight). I hope to write a very expository-level article on this project that I’ve been working on, outsourcing the behind-the-scenes computations to posts such as this one.

I’d like to add that this post took almost no time to write, as I used some vim macros and latex2wp to automatically convert a segment of something I’d written into wordpress-able html containing the latex. And that’s pretty awesome.

There is a particular calculation that I’ve had to do repeatedly recently, and that I will mention and use again. In an effort to have a readable account of this calculation, I present one such account here. Finally, I cannot help but say that this (and the next few posts, likely) are all joint work with Chan and Mehmet, also from Brown University.

Let us consider the following generalized Gauss Sum:

\displaystyle H_m(c') : = \varepsilon_{c'} \sum_{r_1\bmod c'}\left(\frac{r_1}{c'}\right) e^{2 \pi i m\frac{r_1}{c'}} \ \ \ \ \ (1)

where I let {\left(\frac{\cdot}{\cdot}\right)} be the Legendre Symbol, and where {\varepsilon_k} is the sign of the {k}th Gauss sum, so that it is {1} if {k \equiv 1 \mod 4} and it is {i} if {k \equiv 3 \mod 4}. It is not defined for {k} even.

Lemma 1 {H_m(n)} is multiplicative in {n}.

Proof: Let {n_1,n_2} be two relatively prime integers. Any integer {a \bmod n_1n_2} can be written as {a = b_2n_1 + b_1n_2}, where {b_1} runs through integers {\bmod\, n_1} and {b_2} runs {\bmod\, n_2} with the Chinese Remainder Theorem. Then

\displaystyle H_m(n_1n_2) = \varepsilon_{n_1n_2} \sum_{a \bmod n_1n_2} \left(\frac{a}{n_1n_2}\right) e^{2\pi i m \frac{a}{n_1n_2}}

\displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \sum_{b_2 \bmod n_2} \left(\frac{b_2n_1 +b_1n_2}{n_1n_2}\right) e^{2 \pi im\frac{b_2n_1 +b_1n_2}{n_1n_2}}

\displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \left(\frac{b_2n_1 +b_1n_2}{n_1}\right) e^{2\pi i m \frac{b_1n_2}{n_1n_2}} \sum_{b_2\bmod n_2} \left(\frac{b_2n_1 +b_1n_2}{n_2}\right) e^{2\pi im\frac{b_2n_1}{n_1n_2}}

\displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \left(\frac{b_1n_2}{n_1}\right) e^{2\pi i m \frac{b_1}{n_1}} \sum_{b_2\bmod n_2} \left(\frac{b_2n_1}{n_2}\right) e^{2\pi im\frac{b_2}{n_2}}

\displaystyle = \varepsilon_{n_1n_2}\left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right)\sum_{b_1 \bmod n_1} \left(\frac{b_1}{n_1}\right) e^{2\pi i m \frac{b_1}{n_1}} \sum_{b_2\bmod n_2} \left(\frac{b_2}{n_2}\right) e^{2\pi im\frac{b_2}{n_2}}

\displaystyle = \varepsilon_{n_1n_2} \varepsilon_{n_1}^{-1} \varepsilon_{n_2}^{-1} \left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right) H_m(n_1)H_{m}(n_2)

Using quadratic reciprocity, we see that {\varepsilon_{n_1n_2} \varepsilon_{n_1}^{-1} \varepsilon_{n_2}^{-1} \left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right)= 1}, so that {H_m(n_1n_2) = H_m(n_1)H_m(n_2)}. \Box
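With several moving pieces in play, a numerical spot-check is reassuring. Here is a sketch in Python, computing the symbol as a Jacobi symbol (via the standard binary algorithm) so that composite odd moduli make sense; the check verifies multiplicativity for n_1 = 3, n_2 = 5 with m = 1, along with the closed-form values H_1(3) = -\sqrt{3} and H_1(5) = \sqrt{5}:

```python
import cmath
from math import sqrt

def jacobi(a, n):
    # standard Jacobi symbol algorithm; n odd and positive
    a %= n
    result = 1
    while a:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):   # supplement for (2/n)
                result = -result
        a, n = n, a               # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def H(m, c):
    # the generalized Gauss sum H_m(c) from (1), for odd c
    eps = 1 if c % 4 == 1 else 1j
    return eps * sum(jacobi(r, c) * cmath.exp(2j * cmath.pi * m * r / c)
                     for r in range(c))

print(abs(H(1, 15) - H(1, 3) * H(1, 5)) < 1e-9)  # True: multiplicativity
print(abs(H(1, 3) + sqrt(3)) < 1e-9)             # True: H_1(3) = -sqrt(3)
```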

Let’s calculate {H_m(p^k)} for prime powers {p^k}. Let {\zeta = e^{2\pi i /p^k}} be a primitive {p^k}th root of unity. First we deal with the case of odd {p}, {p\not |m}. If {k = 1}, we have the typical quadratic Gauss sum multiplied by {\varepsilon _p}

\displaystyle H_m(p) = \varepsilon_p \sum_{a \bmod p} e^{2\pi i m \frac a p}\left(\frac a p\right) = \varepsilon_p \left(\frac m p\right) \varepsilon_p \sqrt p = \left(\frac{-m} p\right) \sqrt p \ \ \ \ \ (2)

For {k > 1}, we will see that {H_m(p^k)} is {0}. We split into cases when {k} is even or odd. If {k} is even, then we are just summing the primitive {p^k}th roots of unity, which is {0}. If {k>1} is odd,

\displaystyle \sum_{a\bmod p^k} \zeta^a \left(\frac a {p^k}\right) = \sum_{a\bmod p^k} \zeta^a \left(\frac{a}{p}\right) = \sum_{b \bmod p}\sum_{c\bmod p^{k-1}} \zeta^{b+pc} \left(\frac b p\right)

\displaystyle = \sum_{b\bmod p} \zeta^b \left(\frac b p\right) \sum_{c\bmod p^{k-1}} \zeta^{pc} = 0, \ \ \ \ \ (3)

since the inner sum is again a sum of roots of unity. Thus

\displaystyle \left(1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} + \frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}} + \cdots \right) =

\displaystyle = \left(1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}}\right)

\displaystyle = \left(1+ \left(\frac {-m(-1)^{k + 1/2}}{p}\right)\frac{1}{p^{2s-\frac12}} \right)

\displaystyle = \left. \left(1-\frac1{p^{4s-1}}\right) \middle/ \left(1- \left(\frac{m(-1)^{k - 1/2}}{p}\right)\frac{1}{p^{2s-\frac12}}\right)\right.

Notice that this matches up with the {p}th part of the Euler product for {\displaystyle \frac{L(2s-\frac12,\left(\frac{m(-1)^{k - 1/2}}{\cdot}\right))}{\zeta(4s-1)}}.

Now consider those odd {p} such that {p\mid m}. Suppose {p^l \parallel m}. Then {e^{2 \pi i m / p^k} = \zeta^m} is a primitive {p^{k-l}}th root of unity (or {1} if {l \geq k}). If {l \geq k}, then

\displaystyle \sum_{a \bmod p^k} \zeta^{am} \left(\frac{a}{p^k}\right) = \sum_{a \bmod p^k} \left(\frac{a}{p^k}\right) = \begin{cases} 0 &\text{if } 2\not | k \\ \phi(p^k) &\text{if } 2 \mid k \end{cases} \ \ \ \ \ (4)

If {k=l+1} and {k} is odd, then we essentially have a Gauss sum

\displaystyle \sum_{a\bmod p^k} \zeta^{am} \left(\frac{a}{p^k}\right) = \sum_{a\bmod p^k}\zeta^{am} \left(\frac a p\right) =

\displaystyle = \sum_{c\bmod p^{k-1}} \zeta^{pcm} \sum_{b\bmod p} \zeta^{bm} \left(\frac b p\right) = p^{k-1}\left(\frac{m/p^l}{p}\right)\varepsilon_p\sqrt p

If {k = l+1} and {k} is even, noting that {\zeta^m} is a {p}th root of unity,

\displaystyle \sum_{a\bmod p^k} \zeta^{am}\left(\frac {a}{p^k}\right) = \sum_{\substack{a\bmod p^k\\(a,p) = 1}} \zeta^{am} =

\displaystyle = \sum_{a\bmod p^k}\zeta^{am} - \sum_{a\bmod p^{k-1}}\zeta^{pam} = 0 - p^{k-1} = -p^l.

If {k>l+1} then the sum will be zero. For {k} even, this follows from the previous case. If {k} is odd,

\displaystyle \sum_{a\bmod p^k} \zeta^{am} \left(\frac a{p^k}\right) = \sum_{b\bmod p}\zeta^{bm} \left(\frac b p \right)\sum_{c\bmod p^{k-1}}\zeta^{pmc}= 0.

Now, consider the Dirichlet series

\displaystyle \sum_{c > 0,\ c \text{ odd}} \frac{H_m(c)}{c^{2s}} = \prod_{p \neq 2} \left( 1 + \frac{H_m(p)}{p^{2s}} + \frac{H_m(p^2)}{p^{4s}} + \ldots\right).

Let us combine all these facts to construct the {p}th factor of the Dirichlet series in question, for {p} dividing {m}. Assume first that {p^l\parallel m} with {l} even,

\displaystyle 1 + \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} + \frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}}+ \cdots =

\displaystyle = \left( 1+ \varepsilon_{p^2}\frac{\phi(p^2)}{p^{4s}} + \cdots + \varepsilon_{p^l}\frac{\phi(p^l)}{p^{2ls}} + \varepsilon_{p^{l+1}}\frac{\left(\frac{(-1)^{k + 1/2}m/p^l}{p}\right)\varepsilon_p \sqrt p p^l}{p^{2(l+1)s}}\right)

\displaystyle = \left( 1+\frac{\phi(p^2)}{p^{4s}} + \frac{\phi(p^4)}{p^{8s}}+\cdots +\frac{\phi(p^{l})}{p^{2ls}} + \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)p^{l+\frac12}}{p^{2(l+1)s}}\right)

\displaystyle = \left(1+ \frac{p^2 - p}{p^{4s}} + \cdots + \frac{p^{l}-p^{l-1}}{p^{2ls}} + \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)p^{l+\frac12}}{p^{2(l+1)s}}\right)

\displaystyle = \left(1-\frac{1}{p^{4s-1}}\right)\left(1+\frac{1}{p^{4(s-\frac12)}} +\cdots + \frac{1}{p^{2(l-2)(s-\frac12)}}\right)+

\displaystyle +\frac{p^l}{p^{2ls}} \left(1+ \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)

\displaystyle = \left(1-\frac{1}{p^{4s-1}}\right) \left(1+ \frac{1}{p^{4(s-\frac12)}}+\cdots +\right.

\displaystyle + \left. \frac{1}{p^{2(l-2)(s-\frac12)}} + \frac{1}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1}\right)

\displaystyle = \left(1-\frac{1}{p^{4s-1}}\right) \left(\sum_{i=0}^{\lfloor \frac{l-1}{2} \rfloor} \frac{1}{p^{4(s-\frac12)i}} +\frac{1}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1} \right)

because for even {k}, {\varepsilon_{p^k} = 1}, and for odd {k}, {\varepsilon_{p^k} = \varepsilon_p}. Similarly, for {l} odd,

\displaystyle 1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} +\frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}}+ \cdots

\displaystyle = \left( 1+ \varepsilon_{p^2}\frac{\phi(p^2)}{p^{4s}} + \varepsilon_{p^4}\frac{\phi(p^4)}{p^{8s}} + \cdots + \varepsilon_{p^{l-1}}\frac{\phi(p^{l-1})}{p^{2(l-1)s}} + \varepsilon_{p^{l+1}}\frac{- p^l}{p^{2(l+1)s}}\right)

\displaystyle = \left( 1+\frac{\phi(p^2)}{p^{4s}} + \frac{\phi(p^4)}{p^{8s}}+\cdots +\frac{\phi(p^{l-1})}{p^{2(l-1)s}} + \frac{-p^{l}}{p^{2(l+1)s}}\right)

\displaystyle = \left(1+ \frac{p^2 - p}{p^{4s}} + \frac{p^4-p^3}{p^{8s}} + \cdots + \frac{p^{l-1}-p^{l-2}}{p^{2(l-1)s}} - \frac{p^l}{p^{2(l+1)s}}\right)

\displaystyle = \left(1-\frac{1}{p^{4s-1}}\right)\left(\sum_{i=0}^{\frac{l-1}{2}} \frac{1}{p^{4(s-\frac12)i}}\right)

Putting this together, we get that

\displaystyle \prod_{p \neq 2} \left(\sum_{j=0}^\infty \frac{H_m(p^j)}{p^{2js}}\right) = \frac{L_2(2s-\frac12,\left(\frac{m(-1)^{k - 1/2}}{\cdot}\right))}{\zeta_{2}(4s-1)} \times

\displaystyle \phantom{\sum \sum\sum\sum} \prod_{p^l \parallel m, p\neq 2} \left(\sum_{i=0}^{\lfloor \frac{l-1}{2} \rfloor} \frac{1}{p^{4(s-\frac12)i}} +\frac{\mathbf{1}_{2{\mathbb Z}}(l)}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1}\right) \ \ \ \ \ (5)
