Notes on the first week (SummerNT) Monday, Jul 1 2013 

We’ve covered a lot of ground this first week! I wanted to provide a written summary, with partial proof, of what we have done so far.

We began by learning about proofs. We talked about direct proofs, inductive proofs, proofs by contradiction, and proofs by using the contrapositive of the statement we want to prove. A proof is a justification and argument based upon certain logical premises (which we call axioms); in contrast to other disciplines, a mathematical proof is completely logical and can be correct or incorrect.

We then established a set of axioms for the integers that would serve as the foundation of our exploration into the (often fantastic yet sometimes frustrating) realm of number theory. In short, the integers are a non-empty set with addition and multiplication [which are both associative, commutative, and have an identity, and which behave as we think they should behave; further, there are additive inverses], a total order [an integer is either bigger than, less than, or equal to any other integer, and it behaves like we think it should under addition and multiplication], and satisfying the deceptively important well ordering principle [every nonempty set of positive integers has a least element].

With this logical framework in place, we really began number theory in earnest. We talked about divisibility [we say that a divides b, written a \mid b, if b = ak for some integer k]. We showed that every number has a prime factorization. To do this, we used the well-ordering principle.

Suppose that not all integers greater than 1 have a prime factorization. Then by the well-ordering principle there must be a smallest positive integer (greater than 1) that does not have a prime factorization: call it n. Then n is either prime or composite. If it's prime, then it is its own prime factorization. If it's composite, then it factors as n = ab with 1 < a, b < n. But then each of a, b has a prime factorization, since both are less than n. Multiplying these together, we see that n has a prime factorization after all, a contradiction. \diamondsuit
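
The argument above translates directly into a recursive procedure. Here's a minimal sketch in Python (the function name is mine, just for illustration):

```python
def prime_factors(n):
    """Return the prime factorization of n >= 2 as a list of primes.

    Mirrors the proof: if n is composite, write n = a * b with a, b < n
    (here a is the smallest divisor > 1, which is necessarily prime),
    and recurse on the cofactor.
    """
    for a in range(2, int(n**0.5) + 1):
        if n % a == 0:          # n is composite: n = a * (n // a)
            return [a] + prime_factors(n // a)
    return [n]                  # no divisor found: n itself is prime
```

For example, prime_factors(360) returns [2, 2, 2, 3, 3, 5].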

Our first major result is the following:

There are infinitely many primes

There are many proofs, and we saw two of them in class. For posterity, I'll present three here.

First proof that there are infinitely many primes

Take a finite collection of primes, say p_1, p_2, \ldots, p_k. We will show that there is at least one more prime not in the collection. To see this, consider the number p_1 p_2 \ldots p_k + 1. We know that this number factors into primes, but upon division by every prime in our collection, it leaves a remainder of 1. Thus it has at least one prime factor different from every prime in our collection. \diamondsuit

This was a common proof used in class. A pattern also quickly emerges: 2 + 1 = 3, a prime. 2\cdot3 + 1 = 7, a prime. 2 \cdot 3 \cdot 5 + 1 = 31, also a prime. Is it always the case that a product of primes plus one is another prime? No, in fact. If you look at 2 \cdot 3 \cdot 5 \cdot 7 \cdot 11 \cdot 13 + 1 = 30031 = 59 \cdot 509, you get a composite.
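
We can check this pattern-breaking example by brute force; a quick sketch (the helper name is mine):

```python
def smallest_prime_factor(n):
    """Smallest prime factor of n >= 2, by trial division."""
    a = 2
    while a * a <= n:
        if n % a == 0:
            return a
        a += 1
    return n

product = 2 * 3 * 5 * 7 * 11 * 13     # = 30030
p = smallest_prime_factor(product + 1)
# p is 59, and 30031 = 59 * 509, so 30031 is not prime --
# but note that 59 is a prime outside our original collection,
# exactly as the proof predicts.
```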

Second proof that there are infinitely many primes

In a similar vein to the first proof, we will show that for any positive integer n there is a prime larger than n. To see this, consider n! + 1. Upon dividing by any prime up to n, we get a remainder of 1. So all of its prime factors are larger than n, and since n was arbitrary, there are infinitely many primes. \diamondsuit

I would also like to present one more, which I’ve always liked.

Third proof that there are infinitely many primes

Suppose there are only finitely many primes p_1, \ldots, p_k. Then consider the two numbers n = p_1 \cdots p_k and n - 1. We know that n - 1 has a prime factor P, and since n is the product of all the primes, P also divides n. But then P divides n - (n - 1) = 1, which is nonsense; no prime divides 1. Thus there are infinitely many primes. \diamondsuit

We also looked at modular arithmetic, often called the arithmetic of a clock. When we say that a \equiv b \mod m, we mean that m \mid (b - a), or equivalently that a = b + km for some integer k (can you show these are equivalent?). We pronounce that statement as "a is congruent to b mod m." We played a lot with modular arithmetic: we added, subtracted, and multiplied many times, hopefully enough to build a bit of familiarity with the feel. In most ways, it feels like regular arithmetic. But in some ways, it's different. Looking at the integers \mod m partitions the integers into a set of equivalence classes, i.e. into sets of integers that are congruent to 0 \mod m, 1 \mod m, \ldots. When we talk about adding or multiplying numbers \mod m, we're really talking about manipulating these equivalence classes. (This isn't super important to us – just a hint at what's going on beneath the surface.)

We expect that if a \equiv b \mod m, then we would also have ac \equiv bc \mod m for any integer c, and this is true (can you prove this?). But we would also expect that if we had ac \equiv bc \mod m, then we would necessarily have a \equiv b \mod m, i.e. that we can cancel out the same number on each side. And it turns out that’s not the case. For example, 4 \cdot 2 \equiv 4 \cdot 5 \mod 6 (both are 2 \mod 6), but ‘cancelling the fours’ says that 2 \equiv 5 \mod 6 – that’s simply not true. With this example in mind, we went about proving things about modular arithmetic. It’s important to know what one can and can’t do.
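
A two-line check of the failed cancellation, as a sketch:

```python
# 4*2 and 4*5 land in the same class mod 6 ...
assert (4 * 2) % 6 == (4 * 5) % 6 == 2
# ... yet 2 and 5 are not congruent mod 6, so we cannot cancel the 4:
assert 2 % 6 != 5 % 6
```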

One very big and important observation that we noted is that it doesn’t matter what order we operate, as in it doesn’t matter if we multiply an expression out and then ‘mod it’ down, or ‘mod it down’ and then multiply, or if we intermix these operations. Knowing this allows us to simplify expressions like 11^4 \mod 12, since we know 11 \equiv -1 \mod 12, and we know that (-1)^2 \equiv 1 \mod 12, and so 11^4 \equiv (-1)^{2\cdot 2} \equiv 1 \mod 12. If we’d wanted to, we could have multiplied it out and then reduced – the choice is ours!
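
In code, Python's built-in pow with a modulus reduces as it multiplies, and all routes agree; a quick sketch:

```python
# Multiply out, then reduce:
direct = (11 ** 4) % 12
# Reduce at every step (three-argument pow does this internally):
fast = pow(11, 4, 12)
# Replace 11 by -1 first, since 11 ≡ -1 (mod 12):
clever = pow(-1, 4, 12)
assert direct == fast == clever == 1
```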

Amidst our exploration of modular arithmetic, we noticed some patterns. Some numbers  are invertible in the modular sense, while others are not. For example, 5 \cdot 5 \equiv 1 \mod 6, so in that sense, we might think of \frac{1}{5} \equiv 5 \mod 6. More interestingly but in the same vein, \frac{1}{2} \equiv 6 \mod 11 since 2 \cdot 6 \equiv 1 \mod 11. Stated more formally, a number a has a modular inverse a^{-1} \mod m if there is a solution to the modular equation ax \equiv 1 \mod m, in which case that solution is the modular inverse. When does this happen? Are these units special?
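
A brute-force search makes the definition concrete; a minimal sketch (the function name is mine):

```python
def mod_inverse(a, m):
    """Return x with a*x ≡ 1 (mod m), or None if a is not invertible mod m."""
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    return None
```

For example, mod_inverse(5, 6) returns 5 and mod_inverse(2, 11) returns 6, matching the examples above, while mod_inverse(2, 6) returns None: 2 has no inverse mod 6.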

Returning to division, we think of the greatest common divisor. I showed you the Euclidean algorithm, and you managed to prove it in class. The Euclidean algorithm produces the greatest common divisor of a and b, and it looks like this (where I assume that b > a):

b = q_1 a + r_1

a = q_2 r_1 + r_2

r_1 = q_3 r_2 + r_3

\cdots

r_k = q_{k+2}r_{k+1} + r_{k+2}

r_{k+1} = q_{k+3}r_{k+2} + 0

where in each step, we just did regular old division to guarantee a remainder r_i that is less than the divisor. As each remainder becomes the next divisor, this yields a strictly decreasing sequence of nonnegative remainders, so the process terminates (in fact, very quickly). Further, using the notation from above, I claimed that the gcd of a and b is the last nonzero remainder, in this case r_{k+2}. How did we prove it?
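
The division chain above is easy to code; a sketch:

```python
def gcd(a, b):
    """Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b)."""
    while b != 0:
        # the old divisor becomes the new dividend,
        # and the remainder becomes the new divisor
        a, b = b, a % b
    return a
```

For example, gcd(252, 198) returns 18, via the chain 252 = 1·198 + 54, 198 = 3·54 + 36, 54 = 1·36 + 18, 36 = 2·18 + 0.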

Proof of Euclidean Algorithm

Suppose that d is a common divisor (such as the greatest common divisor) of a and b. Then d divides the left hand side of b - q_1 a = r_1, and thus must also divide the right hand side. So any divisor of a and b is also a divisor of r_1. This carries down the list, so that the gcd of a and b will divide each remainder term. How do we know that the last remainder is exactly the gcd, and no more? The way we proved it in class relied on the observation that r_{k+2} \mid r_{k+1}. But then r_{k+2} divides the right hand side of r_k = q_{k+2} r_{k+1} + r_{k+2}, and so it also divides the left. This also carries up the chain, so that r_{k+2} divides both a and b. So it is itself a divisor, and thus cannot be larger than the greatest common divisor. \diamondsuit

As an aside, I really liked the way it was proved in class. Great job!

The Euclidean algorithm can be run backwards with back-substitution (some call this the extended Euclidean algorithm) to give a solution in x,y to the equation ax + by = \gcd(a,b). This has played a super important role in our class ever since. By the way, though I never said it in class, we proved Bezout's Identity along the way (which we just called part of the Extended Euclidean Algorithm). This essentially says that the gcd of a and b is the smallest positive number expressible in the form ax + by. The Euclidean algorithm has shown us that the gcd is expressible in this form. How do we know it's the smallest? Observe again that if c is a common divisor of a and b, then c divides the left hand side of ax + by = d, and so c \mid d. So d cannot be smaller than the gcd.
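
The back-substitution can be done in one recursive pass; here is a sketch of the extended Euclidean algorithm (function name mine):

```python
def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, x, y = extended_gcd(b, a % b)
    # We have b*x + (a % b)*y == g; substitute a % b == a - (a // b)*b
    # and regroup to read off coefficients for a and b.
    return (g, y, x - (a // b) * y)
```

For example, extended_gcd(252, 198) returns (18, 4, -5), and indeed 252·4 − 198·5 = 18.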

This led us to explore and solve linear Diophantine equations of the form ax + by = c for general a,b,c. There are solutions precisely when \gcd(a,b) \mid c, and in such cases there are infinitely many solutions (do you remember how to generate infinitely many other solutions from one?).

Linear Diophantine equations are very closely related to linear problems in modular arithmetic of the form ax \equiv c \mod m. In particular, this modular equation is equivalent to ax + my = c for some y (can you show these are the same?). Using what we've learned about linear Diophantine equations, we know that ax \equiv c \mod m has a solution iff \gcd(a,m) \mid c. But now, there are only finitely many incongruent (i.e. distinct \mod m) solutions. This is called the 'Linear Congruence Theorem,' and is interestingly the first major result we've learned with no proof on wikipedia.

Theorem: the modular equation ax \equiv b \mod m has a solution iff \gcd(a,m) \mid b, in which case there are exactly \gcd(a,m) incongruent solutions.

Proof

We can translate a solution of ax \equiv b \mod m into a solution of ax + my = b, and vice-versa. So we know from the Extended Euclidean algorithm that there are solutions only if \gcd(a,m) \mid b. Now, let's show that there are exactly \gcd(a,m) incongruent solutions. I will do this a bit differently than how we did it in class.

First, let’s do the case when gcd(a,m)=1, and suppose we have a solution (x,y) so that ax + my = b. If there is another solution, then there is some perturbation we can do by shifting x by a number x' and y by a number y' that yields another solution looking like a(x + x') + m(y + y') = b. As we already know that ax + my = b, we can remove that from the equation. Then we get simply ax' = -my'. Since \gcd(m,a) = 1, we know (see below the proof) that m divides x'. But then the new solution x + x' \equiv x \mod m, so all solutions fall in the same congruence class – the same as x.

Now suppose that \gcd(a,m) = d and that there is a solution. Since there is a solution, each of a, m, and b is divisible by d, and we can write a = da', b = db', m = dm'. Then the modular equation ax \equiv b \mod m is the same as da'x \equiv db' \mod dm', which is the same as dm' \mid (db' - da'x). In this last statement, we can remove the d from both sides, so that m' \mid (b' - a'x), i.e. a'x \equiv b' \mod m'. From the first case, we know this has exactly one solution mod m', but we are interested in solutions mod m. Just as knowing that x \equiv 2 \mod 4 means that x might be 2, 6, 10 \mod 12 since 4 goes into 12 three times, m' goes into m exactly d times, and this gives us our d incongruent solutions. \diamondsuit
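
The theorem is easy to check by brute force for small moduli; a sketch (names mine):

```python
from math import gcd

def incongruent_solutions(a, b, m):
    """All x in 0..m-1 with a*x ≡ b (mod m)."""
    return [x for x in range(m) if (a * x) % m == b % m]

# 4x ≡ 2 (mod 6): gcd(4, 6) = 2 divides 2, so exactly 2 solutions (x = 2, 5).
# 4x ≡ 3 (mod 6): gcd(4, 6) = 2 does not divide 3, so no solutions.
```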

I mentioned that we used a fact that we've proven three times in class now in different forms: if \gcd(a,b) = 1 and a \mid bc, then we can conclude that a \mid c. Can you prove this? Can you prove this without using unique factorization? We actually used this fact to prove unique factorization (really we use the statement about primes: if p is a prime and p \mid ab, then we must have that p \mid a or p \mid b, or perhaps both). Do you remember how we proved that? We used the well-ordering principle to say that if there were a positive integer without a unique factorization, then there would be a smallest such integer. Choosing two of its factorizations and finding a prime on one side, we concluded that this prime divided the other side. Dividing both sides by this prime yielded a smaller (and therefore unique, by minimality) factorization. This was the gist of the argument.

The last major bit of the week was the Chinese Remainder Theorem, which is awesome enough (and which I have enough to say about) that it will get its own post – which I’m working on now.

I’ll see you all in class tomorrow.

A proof from the first sheet (SummerNT) Monday, Jun 24 2013 

In class today, we were asked to explain what was wrong with the following proof:

Claim: As x increases, the function

\displaystyle f(x)=\frac{100x^2+x^2\sin(1/x)+50000}{100x^2}

approaches (gets arbitrarily close to) 1.

Proof: Look at values of f(x) as x gets larger and larger.

f(5) \approx 21.002
f(10)\approx 6.0010
f(25)\approx 1.8004
f(50)\approx 1.2002
f(100) \approx 1.0501
f(500) \approx 1.0020

These values are clearly getting closer to 1. QED

Of course, this is incorrect. Choosing a couple of numbers and thinking there might be a pattern does not constitute a proof.

But on a related note, these sorts of questions (where you observe a pattern and seek to prove it) can sometimes lead to strongly suspected conjectures, which may or may not be true. Here’s an interesting one (with a good picture over at SpikedMath):

Draw 2 points on the circumference of a circle, and connect them with a line. How many regions is the circle divided into? (two). Draw another point, and connect it to the previous points with a line. How many regions are there now? Draw another point, connecting to the previous points with lines. How many regions now? Do this once more. Do you see the pattern? You might even begin to formulate a belief as to why it’s true.

But then draw one more point and its lines, and carefully count the number of regions formed in the circle. How many regions now? (It doesn't fit the obvious pattern).
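
For the record, the count is known in closed form (this is often called Moser's circle problem): with n points in general position, the number of regions is \binom{n}{4} + \binom{n}{2} + 1. A quick sketch to generate the sequence:

```python
from math import comb

def circle_regions(n):
    """Regions cut out by all chords among n points in general position."""
    return comb(n, 4) + comb(n, 2) + 1

# The sequence begins 1, 2, 4, 8, 16 -- and then 31, breaking the
# tempting powers-of-two pattern.
```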

So we know that the presented proof is incorrect. But let's say we want to know whether the statement is true. How can we prove it? Further, we want to prove it without calculus – we are interested in an elementary proof. How should we proceed?

Firstly, we should say something about radians. Recall that at an angle \theta (in radians) on the unit circle, the arc-length subtended by the angle \theta is exactly \theta (in fact, this is the defining attribute of radians). And the value \sin \theta is exactly the height, or rather the y value, of the part of the unit circle at angle \theta. It’s annoying to phrase, so we look for clarification at the hastily drawn math below:


The arc length subtended by theta has length theta. The value of sin theta is the length of the vertical line in black.

Note in particular that the arc length is longer than the value of \sin \theta, so that \sin \theta < \theta. (This relies critically on the fact that the angle is positive.) Further, we see that this is always true for small, positive \theta. So it will be true that for large, positive x, we'll have \sin \frac{1}{x} < \frac{1}{x}. For those of you who know a bit more calculus: in fact, \sin(\frac{1}{x}) = \frac{1}{x} - \frac{1}{3!\,x^3} + O(\frac{1}{x^5}), which is a more precise statement.
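
A quick numerical sanity check of the inequality \sin(1/x) < 1/x; just a sketch:

```python
import math

# For each large x, sin(1/x) is positive but strictly below 1/x:
for x in [5, 10, 100]:
    theta = 1 / x
    assert 0 < math.sin(theta) < theta
```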

What do we do with this? Well, I say that this allows us to finish the proof.

\dfrac{100x^2 + x^2 \sin(1/x) + 50000}{100x^2} \leq \dfrac{100x^2 + x + 50000}{100x^2} = 1 + \dfrac{1}{100x} + \dfrac{50000}{100x^2}

and it is clear that the last two terms go to zero as x increases. On the other hand, \sin(1/x) > 0 for large x, so f(x) > 1; squeezed between 1 and a quantity tending to 1, f(x) approaches 1. \spadesuit

Finally, I’d like to remind you about the class webpage at the left – I’ll see you tomorrow in class.

Recent developments in Twin Primes, Goldbach, and Open Access Tuesday, May 21 2013 

It has been a busy two weeks all over the math community. Well, at least it seemed so to me. Some of my friends have defended their theses and need only to walk to receive their PhDs; I completed my topics examination, Brown’s take on an oral examination; and I’ve given a trio of math talks.

Meanwhile, there have been developments in a relative of the Twin Primes conjecture, the Goldbach conjecture, and Open Access math journals.

1. Twin Primes Conjecture

The Twin Primes Conjecture states that there are infinitely many primes p such that p+2 is also a prime, and falls under the more general Polignac's Conjecture, which says that for any even n, there are infinitely many primes p such that p+n is also prime. This is another one of those problems that is easy to state but seems tremendously hard to solve. But recently, Dr. Yitang Zhang of the University of New Hampshire has submitted a paper to the Annals of Mathematics (one of the most respected and prestigious journals in the field). The paper is reputedly extremely clear (in contrast to other recent monumental papers in number theory, e.g. the phenomenally technical papers of Mochizuki on the ABC conjecture), and the word on the street is that it went through the entire review process in less than one month. At this time, there is no publicly available preprint, so I have not had a chance to look at the paper. But word is spreading that credible experts have already carefully reviewed the paper and found no serious flaws.

Dr. Zhang's paper proves that there are infinitely many primes that have a corresponding prime at most 70000000 or so away. Thus in particular there is at least one number k such that there are infinitely many primes p with p+k also prime. I did not think that this was within the reach of current techniques. But it seems that Dr. Zhang built on top of the work of Goldston, Pintz, and Yildirim to get his result. Further, it seems that optimization of the result will occur and the difference will be brought way down from 70000000. However, as indicated by Mark Lewko on MathOverflow, this proof will probably not extend naturally to a proof of the Twin Primes conjecture itself. Optimally, it might prove the 'p and p+16' conjecture (which would still be amazing).

One should look out for his paper in an upcoming issue of the Annals.

2. Goldbach Conjecture

I feel strangely tied to the Goldbach Conjecture, as I get far more traffic, emails, and spam concerning my previous post on an erroneous proof of Goldbach than on any other topic I’ve written about. About a year ago, I wrote briefly about progress that Dr. Harald Helfgott had made towards the 3-Goldbach Conjecture. This conjecture states that every odd integer greater than five can be written as the sum of three primes. (This is another easy to state problem that is not at all easy to approach).

One week ago, Helfgott posted a preprint to the arxiv that claims to complete his previous work and prove 3-Goldbach. Further, he uses the circle method and good old L-functions, so I feel like I should read over it more closely to learn a few things as it’s very close to my field. (Further still, he’s a Brandeis alum, and now that my wife will be a grad student at Brandeis I suppose I should include it in my umbrella of self-association). While I cannot say that I read the paper, understood it, and affirm its correctness, I can say that the method seems right for the task (related to the 10th and most subtle of Scott Aaronson’s list that I love to quote).

An interesting side bit to Helfgott's proof is that it only works for numbers larger than 10^{30} or so. Fortunately, he's also given a computer-assisted proof for numbers less than that on the arxiv, along with David Platt. 10^{30} is really, really, really big, so even that is a very slick bit of work.

3. FoM has opened

I care about open access. Fortunately, so do many of the big names. Two of the big attempts to create a good, strong set of open access math journals have just released their first articles. The Forum of Mathematics Sigma and Pi journals have each released a paper on algebraic and complex geometry. And they’re completely open! I don’t know what it takes for a journal to get off the ground, but I know that it starts with people reading its articles. So read up!

The two articles are

GENERIC VANISHING THEORY VIA MIXED HODGE MODULES

and, in Pi

$p$-ADIC HODGE THEORY FOR RIGID-ANALYTIC VARIETIES

Calculations with a Gauss-type Sum Wednesday, Apr 24 2013 

It's been a while since I've posted – I'm sorry. I've been busy, perhaps working on a paper (let's hope it becomes a paper) and otherwise trying to learn things. This post is very closely related to some computations that have been coming up in what I'm currently looking at (in particular, looking at the h-th coefficients of Eisenstein series of half-integral weight). I hope to write a very expository-level article on this project that I've been working on, outsourcing the behind-the-scenes computations to posts such as this one.

I’d like to add that this post took almost no time to write, as I used some vim macros and latex2wp to automatically convert a segment of something I’d written into wordpress-able html containing the latex. And that’s pretty awesome.

There is a particular calculation that I’ve had to do repeatedly recently, and that I will mention and use again. In an effort to have a readable account of this calculation, I present one such account here. Finally, I cannot help but say that this (and the next few posts, likely) are all joint work with Chan and Mehmet, also from Brown University.

Let us consider the following generalized Gauss Sum:

\displaystyle H_m(c') : = \varepsilon_{c'} \sum_{r_1\bmod c'}\left(\frac{r_1}{c'}\right) e^{2 \pi i m\frac{r_1}{c'}} \ \ \ \ \ (1)

where I let {\left(\frac{\cdot}{\cdot}\right)} be the Legendre Symbol, and where {\varepsilon_k} is the sign of the {k}th Gauss sum, so that it is {1} if {k \equiv 1 \mod 4} and it is {i} if {k \equiv 3 \mod 4}. It is not defined for {k} even.

Lemma 1 {H_m(n)} is multiplicative in {n}.

Proof: Let {n_1,n_2} be two relatively prime integers. Any integer {a \bmod n_1n_2} can be written as {a = b_2n_1 + b_1n_2}, where {b_1} runs through integers {\bmod\, n_1} and {b_2} runs {\bmod\, n_2} with the Chinese Remainder Theorem. Then

\displaystyle H_m(n_1n_2) = \varepsilon_{n_1n_2} \sum_{a \bmod n_1n_2} \left(\frac{a}{n_1n_2}\right) e^{2\pi i m \frac{a}{n_1n_2}}

\displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \sum_{b_2 \bmod n_2} \left(\frac{b_2n_1 +b_1n_2}{n_1n_2}\right) e^{2 \pi im\frac{b_2n_1 +b_1n_2}{n_1n_2}}

\displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \left(\frac{b_2n_1 +b_1n_2}{n_1}\right) e^{2\pi i m \frac{b_1n_2}{n_1n_2}} \sum_{b_2\bmod n_2} \left(\frac{b_2n_1 +b_1n_2}{n_2}\right) e^{2\pi im\frac{b_2n_1}{n_1n_2}}

\displaystyle = \varepsilon_{n_1n_2} \sum_{b_1 \bmod n_1} \left(\frac{b_1n_2}{n_1}\right) e^{2\pi i m \frac{b_1}{n_1}} \sum_{b_2\bmod n_2} \left(\frac{b_2n_1}{n_2}\right) e^{2\pi im\frac{b_2}{n_2}}

\displaystyle = \varepsilon_{n_1n_2}\left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right)\sum_{b_1 \bmod n_1} \left(\frac{b_1}{n_1}\right) e^{2\pi i m \frac{b_1}{n_1}} \sum_{b_2\bmod n_2} \left(\frac{b_2}{n_2}\right) e^{2\pi im\frac{b_2}{n_2}}

\displaystyle = \varepsilon_{n_1n_2} \varepsilon_{n_1}^{-1} \varepsilon_{n_2}^{-1} \left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right) H_m(n_1)H_{m}(n_2)

Using quadratic reciprocity, we see that {\varepsilon_{n_1n_2} \varepsilon_{n_1}^{-1} \varepsilon_{n_2}^{-1} \left(\frac{n_2}{n_1}\right)\left(\frac{n_1}{n_2}\right)= 1}, so that {H_m(n_1n_2) = H_m(n_1)H_m(n_2)}. \Box
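
As a numerical sanity check of the lemma, here is a sketch that computes H_m(c) directly from the definition. I use the Jacobi symbol, the natural extension of the Legendre symbol to odd composite moduli; all function names are mine.

```python
import cmath

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n, via the standard algorithm."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a:
        while a % 2 == 0:            # pull out factors of 2 from a
            a //= 2
            if n % 8 in (3, 5):      # (2/n) = -1 exactly when n ≡ ±3 (mod 8)
                result = -result
        a, n = n, a                  # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0   # shared factor means symbol is 0

def H(m, c):
    """The generalized Gauss sum H_m(c) for odd c, straight from (1)."""
    eps = 1 if c % 4 == 1 else 1j
    return eps * sum(jacobi(r, c) * cmath.exp(2j * cmath.pi * m * r / c)
                     for r in range(c))
```

One can check numerically that H(1, 3) ≈ -√3 and H(1, 5) ≈ √5 (consistent with the evaluation of H_m(p) below), and that H(1, 15) agrees with H(1, 3)·H(1, 5), as the lemma asserts.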

Let's calculate {H_m(p^k)} for prime powers {p^k}. Let {\zeta = e^{2\pi i /p^k}} be a primitive {p^k}th root of unity. First we deal with the case of odd {p} with {p \nmid m}. If {k = 1}, we have the typical quadratic Gauss sum multiplied by {\varepsilon_p}:

\displaystyle H_m(p) = \varepsilon_p \sum_{a \bmod p} e^{2\pi i m \frac a p}\left(\frac a p\right) = \varepsilon_p \left(\frac m p\right) \varepsilon_p \sqrt p = \left(\frac{-m} p\right) \sqrt p \ \ \ \ \ (2)

For {k > 1}, we will see that {H_m(p^k)} is {0}. We split into cases when {k} is even or odd. If {k} is even, then we are just summing the primitive {p^k}th roots of unity, which is {0}. If {k>1} is odd,

\displaystyle \sum_{a\bmod p^k} \zeta^a \left(\frac a {p^k}\right) = \sum_{a\bmod p^k} \zeta^a \left(\frac{a}{p}\right) = \sum_{b \bmod p}\sum_{c\bmod p^{k-1}} \zeta^{b+pc} \left(\frac b p\right)

\displaystyle = \sum_{b\bmod p} \zeta^b \left(\frac b p\right) \sum_{c\bmod p^{k-1}} \zeta^{pc} = 0, \ \ \ \ \ (3)

since the inner sum is again a sum of roots of unity. Thus

\displaystyle \left(1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} + \frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}} + \cdots \right) =

\displaystyle = \left(1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}}\right)

\displaystyle = \left(1+ \left(\frac {-m(-1)^{k + 1/2}}{p}\right)\frac{1}{p^{2s-\frac12}} \right)

\displaystyle = \left. \left(1-\frac1{p^{4s-1}}\right) \middle/ \left(1- \left(\frac{m(-1)^{k - 1/2}}{p}\right)\frac{1}{p^{2s-\frac12}}\right)\right.

Notice that this matches up with the {p}th part of the Euler product for {\displaystyle \frac{L(2s-\frac12,\left(\frac{m(-1)^{k - 1/2}}{\cdot}\right))}{\zeta(4s-1)}}.

Now consider those odd {p} such that {p\mid m}. Suppose {p^l \parallel m}. Then {e^{2 \pi i m / p^k} = \zeta^m} is a primitive {p^{k-l}}th root of unity (or {1} if {l \geq k}). If {l \geq k}, then

\displaystyle \sum_{a \bmod p^k} \zeta^{am} \left(\frac{a}{p^k}\right) = \sum_{a \bmod p^k} \left(\frac{a}{p^k}\right) = \begin{cases} 0 &\text{if } 2\not | k \\ \phi(p^k) &\text{if } 2 \mid k \end{cases} \ \ \ \ \ (4)

If {k=l+1} and {k} is odd, then we essentially have a Gauss sum

\displaystyle \sum_{a\bmod p^k} \zeta^{am} \left(\frac{a}{p^k}\right) = \sum_{a\bmod p^k}\zeta^{am} \left(\frac a p\right) =

\displaystyle = \sum_{c\bmod p^{k-1}} \zeta^{pcm} \sum_{b\bmod p} \zeta^{bm} \left(\frac b p\right) = p^{k-1}\left(\frac{m/p^l}{p}\right)\varepsilon_p\sqrt p

If {k = l+1} and {k} is even, noting that {\zeta^m} is a {p}th root of unity,

\displaystyle \sum_{a\bmod p^k} \zeta^{am}\left(\frac {a}{p^k}\right) = \sum_{\substack{a\bmod p^k\\(a,p) = 1}} \zeta^{am} =

\displaystyle = \sum_{a\bmod p^k}\zeta^{am} - \sum_{a\bmod p^{k-1}}\zeta^{pam} = 0 - p^{k-1} = -p^l.

If {k>l+1} then the sum will be zero. For {k} even, this follows from the previous case. If {k} is odd,

\displaystyle \sum_{a\bmod p^k} \zeta^{am} \left(\frac a{p^k}\right) = \sum_{b\bmod p}\zeta^{bm} \left(\frac b p \right)\sum_{c\bmod p^{k-1}}\zeta^{pmc}= 0.

Now, consider the Dirichlet series

\displaystyle \sum_{\substack{c > 0 \\ c \text{ odd}}} \frac{H_m(c)}{c^{2s}} = \prod_{p \neq 2} \left( 1 + \frac{H_m(p)}{p^{2s}} + \frac{H_m(p^2)}{p^{4s}} + \ldots\right).

Let us combine all these facts to construct the {p}th factor of the Dirichlet series in question, for {p} dividing {m}. Assume first that {p^l\parallel m} with {l} even,

\displaystyle 1 + \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} + \frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}}+ \cdots =

\displaystyle = \left( 1+ \varepsilon_{p^2}\frac{\phi(p^2)}{p^{4s}} + \cdots + \varepsilon_{p^l}\frac{\phi(p^l)}{p^{2ls}} + \varepsilon_{p^{l+1}}\frac{\left(\frac{(-1)^{k + 1/2}m/p^l}{p}\right)\varepsilon_p \sqrt p p^l}{p^{2(l+1)s}}\right)

\displaystyle = \left( 1+\frac{\phi(p^2)}{p^{4s}} + \frac{\phi(p^4)}{p^{8s}}+\cdots +\frac{\phi(p^{l})}{p^{2ls}} + \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)p^{l+\frac12}}{p^{2(l+1)s}}\right)

\displaystyle = \left(1+ \frac{p^2 - p}{p^{4s}} + \cdots + \frac{p^{l}-p^{l-1}}{p^{2ls}} + \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)p^{l+\frac12}}{p^{2(l+1)s}}\right)

\displaystyle = \left(1-\frac{1}{p^{4s-1}}\right)\left(1+\frac{1}{p^{4(s-\frac12)}} +\cdots + \frac{1}{p^{2(l-2)(s-\frac12)}}\right)+

\displaystyle +\frac{p^l}{p^{2ls}} \left(1+ \frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)

\displaystyle = \left(1-\frac{1}{p^{4s-1}}\right) \left(1+ \frac{1}{p^{4(s-\frac12)}}+\cdots +\right.

\displaystyle + \left. \frac{1}{p^{2(l-2)(s-\frac12)}} + \frac{1}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1}\right)

\displaystyle = \left(1-\frac{1}{p^{4s-1}}\right) \left(\sum_{i=0}^{\lfloor \frac{l-1}{2} \rfloor} \frac{1}{p^{4(s-\frac12)i}} +\frac{1}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1} \right)

because for even {k}, {\varepsilon_{p^k} = 1}, and for odd {k}, {\varepsilon_{p^k} = \varepsilon_p}. Similarly, for {l} odd,

\displaystyle 1+ \frac{\left(\frac{(-1)^{k + 1/2}}{p}\right)H_m(p)}{p^{2s}} +\frac{\left(\frac{(-1)^{k + 1/2}}{p^2}\right)H_m(p^2)}{p^{4s}}+ \cdots

\displaystyle = \left( 1+ \varepsilon_{p^2}\frac{\phi(p^2)}{p^{4s}} + \varepsilon_{p^4}\frac{\phi(p^4)}{p^{8s}} + \cdots + \varepsilon_{p^{l-1}}\frac{\phi(p^{l-1})}{p^{2(l-1)s}} + \varepsilon_{p^{l+1}}\frac{- p^l}{p^{2(l+1)s}}\right)\nonumber

\displaystyle = \left( 1+\frac{\phi(p^2)}{p^{4s}} + \frac{\phi(p^4)}{p^{8s}}+\cdots +\frac{\phi(p^{l-1})}{p^{2(l-1)s}} + \frac{-p^{l}}{p^{2(l+1)s}}\right) \nonumber

\displaystyle = \left(1+ \frac{p^2 - p}{p^{4s}} + \frac{p^4-p^3}{p^{8s}} + \cdots + \frac{p^{l-1}-p^{l-2}}{p^{2(l-1)s}} - \frac{p^l}{p^{2(l+1)s}}\right) \nonumber

\displaystyle = \left(1-\frac{1}{p^{4s-1}}\right)\left(\sum_{i=0}^{\frac{l-1}{2}} \frac{1}{p^{4(s-\frac12)i}}\right)

Putting this together, we get that

\displaystyle \prod_{p \neq 2} \left(1 + \sum_{j=1}^\infty \frac{H_m(p^j)}{p^{2js}}\right) = \frac{L_2(2s-\frac12,\left(\frac{m(-1)^{k - 1/2}}{\cdot}\right))}{\zeta_{2}(4s-1)} \times

\displaystyle \phantom{\sum \sum\sum\sum} \prod_{p^l \parallel m, p\neq 2} \left(\sum_{i=0}^{\lfloor \frac{l-1}{2} \rfloor} \frac{1}{p^{4(s-\frac12)i}} +\frac{\mathbf{1}_{2{\mathbb Z}}(l)}{p^{2l(s-\frac12)}}\left(1-\frac{\left(\frac{(-1)^{k - 1/2}m/p^l}{p}\right)}{p^{2s-\frac12}}\right)^{-1}\right) \ \ \ \ \ (5)

A Book Review of Count Down: The Race for Beautiful Solutions at the IMO Saturday, Feb 16 2013 

I read a lot of popular science and math books. Scientific and mathematical exposition to the public is a fundamental task that must be done; but for some reason, it is simply not getting done well enough. One day, perhaps I'll write expository (i.e. for non-math folk) math myself. But until then, I read everything I can, and I figure that since I read them all, I should share what I think.

Today, I consider the book Count Down: The Race for Beautiful Solutions at the International Mathematics Olympiad, by Steve Olson. The review itself can be found after the fold (more…)

Hurwitz Zeta is a sum of Dirichlet L Functions, and vice-versa Friday, Feb 8 2013 

At least three times now, I have needed to use the fact that the Hurwitz Zeta function is a sum of Dirichlet L-functions (and its converse), only to have forgotten how it goes. And unfortunately, the current wikipedia article on the Hurwitz Zeta function has a mistake, omitting the {\varphi} term (although it will soon be corrected). Instead of re-doing it each time, I write the details here, below the fold.
(more…)

Math journals and the fight over open access Friday, Jan 18 2013 

Some may have heard me talk about this before, but I’ve caught the open source bug. At least, I’ve caught the collaboration and free-dissemination bug. And I don’t just mean software – there’s much more to open source than software (even though the term open source originated in reference to free access to source code). I use open source to refer to the idea that when someone consumes a product, they should have access to the design and details of implementation, and should be able to freely distribute the product whenever this is possible. In some ways, I’m still learning. For example, though I use linux, I do not know enough about coding to contribute actual code to the linux/unix community. But I know just enough python to contribute to Sage, and do. And I’m getting better.

I also believe in open access, which feels like a natural extension. By open access, I mean free access to peer-reviewed scholarly journals and other materials. It stuns me that the public does not generally have access to publicly-funded research. How is this acceptable? Another thing that really gets to me is that selling overpriced and overlarge calculus textbooks can allow an author to do things like spend 30+ million dollars on his home. This should not happen. At least, it shouldn't happen now, in the internet age. All the material is freely available in at least as good a presentation, so the cost of the textbook is a compilation cost (not worth over $100). But these books are printed oversize, 1000+ pages, in full color and on 60-pound paper. That's a recipe for high cost! It's tremendously unfortunate, as it's not as though the students even have a choice over what book they buy. But this is not the argument I want to make today, and I digress.

Recently, I was dragged down a rabbit hole. And what I saw when I emerged on the other side made me learn about a side of math journals I’d never seen before, and the fight over open access. I’d like to comment on this today – that’s after the fold.

(more…)

Are the calculus MOOCs any good: After week 1 Saturday, Jan 12 2013 

This is a continuation of a previous post.

I’ve been following the two Coursera calculus MOOCs: the introductory calculus course being taught by Dr. Fowler of Ohio State University, and a course designed around Taylor expansions taught by Dr. Ghrist of UPenn, meant to be taken after an introductory calculus course. I’ve completed the ‘first week’ of Dr. Fowler’s course (there are 15 total), and the ‘first unit’ of Dr. Ghrist’s course (there are 5 total), and I have a few things to say – after the fold.

(more…)

Are the calculus MOOCs any good? Tuesday, Jan 8 2013 

I like the idea of massive online collaboration in math. For example, I am a big supporter of the ideas of the polymath projects. I contribute to wikis and to Sage (which I highly recommend to everyone as an alternative to the M’s: Maple, Mathematica, MatLab, Magma). Now there are MOOCs (massive open online courses) in many subjects, and in particular there are a growing number of math MOOCs (a more or less complete list of MOOCs can be found here). The idea of a MOOC is to give people all over the world access to a good, diverse, and free education.

I’ve looked at a few MOOCs in the past. I’ve taken a few Coursera and Udacity courses, and I have mixed reviews. Actually, I’ve been very impressed with the Udacity courses I’ve taken. They have a good polish. But there are only a couple dozen – it takes time to get quality. There are hundreds of Coursera courses, though there is some overlap. But I’ve been pretty unimpressed with most of them.

But there are two calculus courses being offered this semester (right now) through Coursera. I’ve been a teaching assistant for calculus many times, and there are things that I like and others that I don’t like about my past experiences. Perhaps the different perspective from a MOOC will lead to a better form of calculus instruction?

There will be no teaching assistant led recitation sections, as the ‘standard university model’ might suggest. Will there be textbooks? In both, there are textbooks, or at least lecture notes (I’m not certain of their format yet). And there will be lectures. But due to the sheer size of the class, it’s much more challenging for the instructors to answer individual students’ questions. There is a discussion forum, which essentially means that students get to help each other (I suppose that people like me, who know calculus, can help through the discussion forums too). So in a few ways, this turns what I have come to think of as the traditional model of calculus instruction on its head.

And this might be a good thing! (Or it might not!) Intro calculus instruction has not really changed much in decades, since before the advent of computers and handheld calculators. It would make sense that new tools might mean that teaching methods should change. But I don’t know yet.

So I’ll be looking at the two courses this semester. The first is being offered by Dr. Jim Fowler and is associated with Ohio State University. It’s an introductory-calculus course. The second is being offered by Dr. Robert Ghrist and is associated with the University of Pennsylvania. It’s sort of a funny class – it’s designed for people who already know some calculus. In particular, students should know what derivatives and integrals are. There is a diagnostic test that involves taking a limit, computing some derivatives, and computing an integral (and some precalculus problems as well). Dr. Ghrist says that his course assumes that students have taken a high school AP Calculus AB course or the equivalent. So it’s not quite fair to compare the two classes, as they’re not on equal footing.

But I can certainly see what I think of the MOOC model for Calculus instruction.

Math 90: Concluding Remarks Sunday, Dec 30 2012 

All is said and done with Math 90 for 2012, and the year is coming to a close. I wanted to take this moment to write a few things about the course, what seemed to go well and what didn’t, and certain trends in the course that I think are interesting and illustrative.

First, we might just say some of the numbers. Math 90 is offered only as pass/fail, with the possibility of ‘passing with distinction’ if you did exceptionally well (I’ll say what that meant here, though who knows what it means in general). We had four people fail, three people ‘pass with distinction,’ and everyone else got a passing mark. Everything else will be after the fold.

(more…)
