In this note, I consider an application of generalized Mobius Inversion to extract information from arithmetical sums with asymptotics of the form \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1 - \epsilon}) for a fixed j and a constant a_1, where the sum runs over both n and k. We will see that \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} + O(x^{1 - \epsilon}).

For completeness, let’s prove the Mobius Inversion formula. Suppose we have an arithmetic function \alpha(n) with Dirichlet inverse \alpha^{-1}(n), so that \alpha * \alpha^{-1}(n) = [n = 1] = \begin{cases} 1 & \text{if } n = 1 \\ 0 & \text{else} \end{cases}, where I use [n = 1] to denote the indicator function of the condition n = 1.

Now suppose F(x) and G(x) are complex-valued functions on [1, \infty). Then

Mobius Inversion

\displaystyle F(x) = \sum_{n \leq x} \alpha(n) G\left(\frac{x}{n}\right) = \alpha * G(x)
if and only if
\displaystyle G(x) = \sum_{n \leq x} \alpha^{-1}(n) F\left(\frac{x}{n}\right) = \alpha^{-1} * F(x)

Proof:
Suppose that \displaystyle F(x) = \sum_{n \leq x}\alpha(n) G\left(\frac{x}{n}\right). Then \displaystyle \sum_{n \leq x} \alpha^{-1}(n) F\left(\frac{x}{n}\right) = \sum_{n \leq x} \alpha^{-1}(n) \sum_{m \leq x/n} \alpha(m)G\left(\frac{x}{mn}\right) = \sum_{n \leq x} \sum_{m \leq x/n} \alpha^{-1}(n) \alpha(m) G\left(\frac{x}{mn}\right).

Let’s collect terms. For any given d, the term G\left(\frac{x}{d}\right) occurs once for each way of writing d = mn. So reorganizing the sum, we get that it’s equal to \displaystyle \sum_{d \leq x} G\left(\frac{x}{d}\right) \sum_{e | d} \alpha(e)\alpha^{-1}\left( \frac{d}{e}\right) = \sum_{d \leq x} G\left(\frac{x}{d}\right) (\alpha * \alpha^{-1})(d). Since we know that \alpha and \alpha^{-1} are Dirichlet inverses, the only term that survives is d = 1. So we have, after the dust settles, that our sum is nothing more than G(x). And the converse is the exact same argument. \diamondsuit
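As a numerical sanity check (my own addition, not part of the proof), here is a sketch verifying the inversion for the concrete pair \alpha = u, \alpha^{-1} = \mu, with an arbitrarily chosen test function G; the helper names mobius, G_recovered, and the choice of G are mine, and exact rational arithmetic avoids floating-point floor errors:

```python
import math
from fractions import Fraction

def mobius(n):
    # mu(n) by trial division: 0 if n has a squared prime factor,
    # otherwise (-1)^(number of distinct prime factors).
    if n == 1:
        return 1
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:  # squared prime factor
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def G(x):
    # An arbitrary test function on [1, infinity).
    return math.floor(x) ** 2

def F(x):
    # F(x) = sum_{n <= x} alpha(n) G(x/n) with alpha = u (all ones).
    return sum(G(Fraction(x) / n) for n in range(1, math.floor(x) + 1))

def G_recovered(x):
    # Inversion: G(x) = sum_{n <= x} mu(n) F(x/n), since u^{-1} = mu.
    return sum(mobius(n) * F(Fraction(x) / n)
               for n in range(1, math.floor(x) + 1))

for x in [1, 7, 20, Fraction(25, 2)]:
    assert G(x) == G_recovered(x)
print("Mobius inversion verified")
```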

We know some Dirichlet inverses already. If u(n) is the function sending everything to 1, i.e. u(n) \equiv 1, then we know that \displaystyle \mu * u(n) = \sum_{d | n} \mu(d) = [n = 1], where \mu(n) is the Mobius function. This means that \displaystyle \sum_n \frac{u(n)}{n^s} \sum_m \frac{\mu(m)}{m^s} = \zeta(s) \sum \frac{\mu(m)}{m^s} = 1, so that for \mathrm{Re}(s) > 1 we have \displaystyle \sum \frac{\mu(n)}{n^s} = \frac{1}{\zeta(s)}.
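The identity \sum_{d | n} \mu(d) = [n = 1] is easy to confirm by brute force; this little check is my addition (the helpers mobius and divisors are names I made up):

```python
def mobius(n):
    # mu(n) by trial division.
    if n == 1:
        return 1
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

# mu * u (n) = sum_{d | n} mu(d) should equal [n = 1].
for n in range(1, 201):
    assert sum(mobius(d) for d in divisors(n)) == (1 if n == 1 else 0)
print("mu * u = [n = 1] holds up to 200")
```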

Let me take a brief aside to mention a particular notation I endorse: the Iverson bracket notation, which says that [P] = \begin{cases} 1 & \text{if } P \text{ true}\\ 0 & \text{else} \end{cases}. Why? Well, a while ago, I really liked to use the bolded 1 indicator function, but then I started doing things with modules and representations, and suddenly a bolded 1 became a multiplicative identity. This is done in Artin’s algebra text too, which I like (but did not learn out of). Then I really liked the \chi-style indicator notation, until it became the default letter for group characters. So I’ve turned to the Iverson bracket, which saves space compared to the other two anyway.

Back to Dirichlet inverses. Some are a bit more challenging to find. We might say, let’s find the Dirichlet inverse of the function [n = a^j] for a fixed j, i.e. the jth-power indicator function. This is one of those things that we’ll be using later, and I’m acting like one of those teachers who just happens to consider the right thing at the right time.

Claim: the Dirichlet inverse of the function [n = a^j] is the function [n = a^j]\mu(a), which through slightly abusive notation is to say the function that is zero on non-jth-powers, and takes the value of \mu(a) when n = a^j.

Possible motivation of the claim: This doesn’t need to have descended as a gift from the gods. Examining [n = a^j] on powers makes it seem like we want a \mu-like function, and a little computation would give a good suggestion as to what to try.

Proof:
Both sides are 1 at n = 1. That’s a good start. Note also that both sides are zero on non-jth-powers (in the convolution, a nonzero term requires both d and n/d to be jth powers, so n itself must be one), so we only consider jth powers. For some m > 1, consider n = m^j. Then the convolution is \displaystyle \sum_{d | m^j} [d = a^j]\left[\frac{m^j}{d} = (a')^j\right]\mu(a'). The jth-power divisors of m^j are exactly the d = e^j with e | m, and then \frac{m^j}{e^j} = \left(\frac{m}{e}\right)^j, so the sum equals \displaystyle \sum_{e | m} \mu\left(\frac{m}{e}\right) = \sum_{e | m} \mu(e) = 0 as m > 1. So we are done. \diamondsuit
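Here is a brute-force check of the claim (again my own sketch; jth_power_root, indicator, and inverse are hypothetical helper names), convolving the two functions and confirming the result is [n = 1]:

```python
def mobius(n):
    # mu(n) by trial division.
    if n == 1:
        return 1
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def jth_power_root(n, j):
    # Return a if n = a**j for a positive integer a, else None.
    a = round(n ** (1.0 / j))
    for cand in (a - 1, a, a + 1):  # guard against float rounding
        if cand >= 1 and cand ** j == n:
            return cand
    return None

def indicator(n, j):
    # [n = a^j]
    return 1 if jth_power_root(n, j) is not None else 0

def inverse(n, j):
    # [n = a^j] mu(a): mu at the jth root, zero off jth powers.
    a = jth_power_root(n, j)
    return 0 if a is None else mobius(a)

j = 3
for n in range(1, 501):
    conv = sum(indicator(d, j) * inverse(n // d, j)
               for d in range(1, n + 1) if n % d == 0)
    assert conv == (1 if n == 1 else 0)
print("Dirichlet inverse of the cube indicator verified up to 500")
```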

That’s sort of nice. Let’s go on to the main bit of the day.

Lemma: \displaystyle \sum_{nk^j \leq x} f(n) = \sum_{n \leq x} f * [n = k^j \text{ for some } k](n).

Proof of lemma:
\displaystyle \sum_{nk^j \leq x} f(n) = \sum_{m \leq x} \sum_{nk^j = m} f(n) = \displaystyle \sum_{m \leq x} \sum_{d | m} [d = k^j]f\left(\frac{m}{d}\right) = \sum_{m \leq x} f * [m = k^j](m) \diamondsuit
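A small numerical check of the lemma (my addition; f is chosen arbitrarily, and is_jth_power is a hypothetical helper):

```python
def is_jth_power(n, j):
    # True iff n = a**j for some positive integer a.
    a = round(n ** (1.0 / j))
    return any(c >= 1 and c ** j == n for c in (a - 1, a, a + 1))

def f(n):
    # An arbitrary arithmetic function for testing.
    return n * n + 1

j, x = 2, 300

# Left side: sum of f(n) over pairs (n, k) with n * k**j <= x.
lhs = sum(f(n)
          for k in range(1, x + 1)
          for n in range(1, x // k ** j + 1))

# Right side: sum_{m <= x} (f * [. is a jth power])(m).
rhs = sum(f(m // d)
          for m in range(1, x + 1)
          for d in range(1, m + 1)
          if m % d == 0 and is_jth_power(d, j))

assert lhs == rhs
print("lemma verified for j = 2, x = 300")
```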

Proposition: \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} + O(x^{1 - \epsilon}), where j \geq 2 and 0 < \epsilon < 1 - \frac{1}{j}.

Proof: We now know that \displaystyle F(x) = \sum_{nk^j \leq x} f(n) = \sum_{mn \leq x} [m = k^j] f(n) = \sum_{m \leq x} [m = k^j] \sum_{n \leq x/m} f(n), which is of the form \displaystyle F(x) = \sum_{n \leq x} \alpha(n)G(x/n) with \alpha(n) = [n = k^j] and \displaystyle G(x) = \sum_{n \leq x} f(n). So by Mobius Inversion, \displaystyle \sum_{n \leq x} f(n) = G(x) = \sum_{n \leq x} [n = k^j]\mu(k) F\left(\frac{x}{n}\right).

Let’s look at this last sum. \displaystyle \sum_{n \leq x} [n = k^j]\mu(k) F\left(\frac{x}{n}\right) = \sum_{n^j \leq x} \mu(n) F\left(\frac{x}{n^j}\right) = \sum_{n^j \leq x} \mu(n) \left( a_1\frac{x}{n^j} + O\left( \frac{x^{1 - \epsilon}}{n^{j(1 - \epsilon)}}\right)\right) = \sum_{n^j \leq x} \mu(n)a_1 \frac{x}{n^j} + \sum_{n^j \leq x} \mu(n)\,O\left( \frac{x^{1 - \epsilon}}{n^{j(1 - \epsilon)}}\right) = a_1 x \sum_{n^j \leq x} \frac{\mu(n)}{n^j} + \sum_{n^j \leq x} \mu(n)\,O\left( \frac{x^{1 - \epsilon}}{n^{j(1 - \epsilon)}}\right).

For the main term, recall that we know that \displaystyle \frac{1}{\zeta(s)} = \sum_n \frac{\mu(n)}{n^s}, so that \displaystyle \sum_{n^j \leq x} \frac{\mu(n)}{n^j} = \frac{1}{\zeta(j)} + O\left(\sum_{n > x^{1/j}} \frac{1}{n^j}\right) = \frac{1}{\zeta(j)} + O\left(x^{(1-j)/j}\right). Multiplying by a_1x, the main term is \frac{a_1x}{\zeta(j)} + O\left(x^{1/j}\right), and the condition on \epsilon ensures \frac{1}{j} < 1 - \epsilon, so this error is absorbed into O(x^{1 - \epsilon}).
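Numerically (my own addition), the partial sums of \mu(n)/n^j do settle down to 1/\zeta(j); here is a sketch for j = 2, where 1/\zeta(2) = 6/\pi^2:

```python
import math

def mobius(n):
    # mu(n) by trial division.
    if n == 1:
        return 1
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

# Partial sum of mu(n)/n^2 versus 1/zeta(2) = 6/pi^2.
partial = sum(mobius(n) / n ** 2 for n in range(1, 20001))
target = 6 / math.pi ** 2
# The tail is at most sum_{n > 20000} 1/n^2, roughly 5e-5.
assert abs(partial - target) < 1e-3
print(f"partial = {partial:.6f}, 1/zeta(2) = {target:.6f}")
```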

For the error term, note that |\mu(n)| \leq 1, so \displaystyle \left|\sum_{n^j \leq x} \mu(n)\,O\left( \frac{x^{1 - \epsilon}}{n^{j(1 - \epsilon)}}\right)\right| \leq O\left( x^{1 - \epsilon} \sum_n \frac{1}{n^{j(1 - \epsilon)}}\right) = O(x^{1 - \epsilon}), where the convergence of this last sum is guaranteed by the condition that \epsilon not be too big: it forces j(1 - \epsilon) > 1 (in a sense, we can’t maintain an arbitrarily small error).

Every step is reversible, so putting it all together we get \displaystyle \sum_{nk^j \leq x} f(n) = a_1x + O(x^{1-\epsilon}) \iff \sum_{n \leq x} f(n) = \frac{a_1x}{\zeta(j)} + O(x^{1 - \epsilon}), as we’d wanted. \diamondsuit
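A classical instance of the proposition (my own worked example, not from the note): take j = 2 and f = \mu^2, the squarefree indicator. Every integer m factors uniquely as a squarefree part times a square, so \sum_{nk^2 \leq x} \mu^2(n) = \lfloor x \rfloor exactly; that is, a_1 = 1 with a perfect error term, and the proposition predicts that the count Q(x) of squarefree integers up to x is \frac{x}{\zeta(2)} + O(x^{1-\epsilon}) = \frac{6x}{\pi^2} + O(x^{1-\epsilon}). A quick sketch:

```python
import math

def mobius(n):
    # mu(n) by trial division; n is squarefree iff mu(n) != 0.
    if n == 1:
        return 1
    result = 1
    d = 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

x = 10_000

# Left-hand sum with f = mu^2 and j = 2: count pairs (n, k),
# n squarefree, with n * k**2 <= x. Unique factorization
# m = (squarefree) * (square) makes this exactly x.
lhs = sum(1
          for k in range(1, math.isqrt(x) + 1)
          for n in range(1, x // (k * k) + 1)
          if mobius(n) != 0)
assert lhs == x

# Right-hand prediction: Q(x) is approximately 6x / pi^2.
Q = sum(1 for n in range(1, x + 1) if mobius(n) != 0)
print(Q, x * 6 / math.pi ** 2)
assert abs(Q - x * 6 / math.pi ** 2) < math.isqrt(x)
```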

In general, if one has error term O(x^\alpha) for some \alpha<1, then this process will yield an error term O(x^{\alpha'}) with \alpha' \geq \alpha, but it will still be true that \alpha' < 1. I have much more to say about applying Mobius Inversion to asymptotics of this form, and will follow this up in another note.
