# The value of the trigonometric harmonic series revisited

Shortly after my last post I realized there was a simpler way of determining the exact value of the series $\sum_{n=1}^\infty\cos n/n$. Instead of following the method I previously described, which required an intricate analysis of some integrals, one can simply use the formula

$\sum_{n=1}^\infty\frac{a^n}{n} = -\ln(1-a)$

which is valid for all $a\in\mathbb{C}$ satisfying $\lvert a\rvert\leq 1$ and $a\neq 1$. This comes from a simple rewriting of the so-called Mercator series (replace $x$ with $-x$ in the Taylor series of $\ln(1+x)$ and then take the negative).
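The formula is easy to spot-check numerically for values strictly inside the unit disk (on the boundary convergence is too slow for a naive partial sum); a minimal Python sketch:

```python
import cmath

def mercator_partial(a, terms):
    """Partial sum of the series a^n / n for n = 1, ..., terms."""
    return sum(a**n / n for n in range(1, terms + 1))

# Spot-check the formula against -ln(1 - a) for a few values with |a| < 1.
for a in [0.5, -0.9, 0.3 + 0.4j]:
    assert abs(mercator_partial(a, 200) - (-cmath.log(1 - a))) < 1e-9
```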

Then we have

\begin{align*}
\sum_{n=1}^\infty\frac{\cos n}{n} &= \sum_{n=1}^\infty\frac{e^{in}+e^{-in}}{2n} \\
&= -\bigl(\ln(1-e^i)+\ln(1-e^{-i})\bigr)/2 \\
&= -\ln\bigl((1-e^i)(1-e^{-i})\bigr)/2 \\
&= -\ln(2-e^i-e^{-i})/2 \\
&= -\ln(2-2\cos1)/2 \\
&\approx 0.0420195
\end{align*}

since $\lvert e^i\rvert=\lvert e^{-i}\rvert=1$, but $e^i\neq1$ and $e^{-i}\neq1$. (Combining the two logarithms into one is also valid here: $1-e^i$ and $1-e^{-i}$ are complex conjugates, so the imaginary parts of their principal logarithms cancel, leaving the real logarithm of the positive real number $(1-e^i)(1-e^{-i})=2-2\cos1$.)
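As a sanity check, the closed form can be compared against a direct partial sum of the series; a quick Python sketch:

```python
import math

# Closed-form value derived above.
closed_form = -math.log(2 - 2 * math.cos(1)) / 2

# Direct partial sum; convergence is only conditional, so the tail is O(1/N),
# but a million terms gives roughly five decimal places.
N = 10**6
partial = sum(math.cos(n) / n for n in range(1, N + 1))

assert abs(partial - closed_form) < 1e-4
```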

# The value of the trigonometric harmonic series

I’ve previously discussed various aspects of the “trigonometric harmonic series” $\sum_{n=1}^\infty\cos n/n$, and in particular showed that the series is conditionally convergent. However, we haven’t found the actual value it converges to; our argument only shows that the value must be less than about $2.54$ in absolute value. In this post, I’ll give a closed-form expression for the exact value that the series converges to.

# The names in boxes puzzle

This is one of the best puzzles I’ve come across:

100 prisoners have their names placed in 100 boxes so that each box contains exactly one name. Each prisoner is permitted to look inside 50 boxes of their choice, but is not allowed any communication with the other prisoners. What strategy maximizes the probability that every prisoner finds their own name?

I heard about this puzzle years ago, spent several days thinking about it, and never quite solved it. Actually, I did think of a strategy in which they would succeed with probability over 30% (!), which was the unbelievably-high success rate quoted in the puzzle as I heard it posed. However, I ended up discarding the strategy, as I didn’t think it could possibly work (and probably wouldn’t have been able to prove it would work in any case).
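For reference, the strategy that achieves the quoted success rate is the well-known cycle-following one: model the boxes and names as a random permutation; every prisoner succeeds exactly when the permutation has no cycle longer than 50, which happens with probability $1-\sum_{k=51}^{100}1/k\approx0.3118$. A minimal Monte Carlo sketch (the indexing convention is mine):

```python
import random

def max_cycle_length(perm):
    """Length of the longest cycle of a permutation given as a list."""
    seen = [False] * len(perm)
    longest = 0
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, i = 0, start
        while not seen[i]:
            seen[i] = True
            i = perm[i]
            length += 1
        longest = max(longest, length)
    return longest

# Prisoner i opens box i, then the box labelled by the name just found, and
# so on; everyone succeeds exactly when no cycle is longer than 50.
random.seed(0)  # fixed seed so the run is reproducible
trials = 50_000
wins = 0
for _ in range(trials):
    perm = list(range(100))
    random.shuffle(perm)
    if max_cycle_length(perm) <= 50:
        wins += 1
rate = wins / trials
assert rate > 0.30  # theoretical value is about 0.3118
```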

# A difference of squared sines

The purpose of this post is to show the interesting identity

$\sin(x)^2-\sin(y)^2 = \sin(x+y)\sin(x-y) .$
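Before proving it, the identity can be spot-checked numerically at random points; a minimal Python sketch:

```python
import math
import random

# Check sin(x)^2 - sin(y)^2 = sin(x+y) sin(x-y) at random points.
random.seed(1)
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    y = random.uniform(-10.0, 10.0)
    lhs = math.sin(x)**2 - math.sin(y)**2
    rhs = math.sin(x + y) * math.sin(x - y)
    assert abs(lhs - rhs) < 1e-12
```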

# A sum of sines

In this post I want to prove a lemma which gives a closed-form expression for the summation $\sum_{n=1}^m\sin(nx)$. The method of proof has come up before; it uses basic algebra, the complex exponential expression for sine, and the formula for the sum of a geometric series.
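For concreteness, here is a numerical check of the standard closed form $\sum_{n=1}^m\sin(nx)=\frac{\sin(mx/2)\,\sin((m+1)x/2)}{\sin(x/2)}$, valid when $\sin(x/2)\neq0$ (the formula itself is the standard one, stated here rather than quoted from the post):

```python
import math

def sine_sum_direct(m, x):
    """Direct evaluation of sin(x) + sin(2x) + ... + sin(mx)."""
    return sum(math.sin(n * x) for n in range(1, m + 1))

def sine_sum_closed(m, x):
    """Standard closed form, valid when sin(x/2) != 0."""
    return math.sin(m * x / 2) * math.sin((m + 1) * x / 2) / math.sin(x / 2)

for m in [1, 2, 5, 50]:
    for x in [0.1, 1.0, 2.5, -3.0]:
        assert abs(sine_sum_direct(m, x) - sine_sum_closed(m, x)) < 1e-9
```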

# Revisiting a lemma

We’ve discussed before the “trigonometric harmonic” series $\sum_{n=1}^\infty\cos n/n$. In particular, we showed that the series converges (conditionally). The argument involved the partial sums of the sequence $\{\cos n\}_{n=1}^\infty$, and we denoted these by $C(m)$. The closed-form expression we found for $C(m)$ involved the quantity $\cos m-\cos(m+1)$; in this post we show that this expression can also be written in the alternative form $2\sin(1/2)\sin(m+1/2)$.
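The rewriting is a direct consequence of the sum-to-product identity $\cos A-\cos B=2\sin\frac{A+B}{2}\sin\frac{B-A}{2}$ with $A=m$, $B=m+1$; a quick numerical spot-check in Python:

```python
import math

# Check cos(m) - cos(m+1) = 2 sin(1/2) sin(m + 1/2) over a range of m.
for m in range(1, 1000):
    lhs = math.cos(m) - math.cos(m + 1)
    rhs = 2 * math.sin(0.5) * math.sin(m + 0.5)
    assert abs(lhs - rhs) < 1e-12
```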

# That harmonic series variant absolutely

Previously I discussed a variant on the harmonic series, $\sum_{n=1}^\infty\cos n/n$. Last time we showed that

$\sum_{n=1}^\infty\frac{\cos n}{n} = \sum_{n=1}^\infty\sum_{m=1}^n\cos m\Bigl(\frac{1}{n}-\frac{1}{n+1}\Bigr) ,$

and then showed that the series on the right converges absolutely, by comparison with the series $\sum_{n=1}^\infty3/n^2$.
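Both ingredients, the summation-by-parts identity and the uniform bound on the partial sums $C(m)$, can be verified numerically; a Python sketch (the bound $3$ matches the comparison series above):

```python
import math

N = 100_000
# Partial sums C(m) = cos(1) + cos(2) + ... + cos(m), with C[0] = 0.
C = [0.0]
for n in range(1, N + 1):
    C.append(C[-1] + math.cos(n))

# C(m) is uniformly bounded; |C(m)| <= 3 suffices for the comparison test.
assert max(abs(c) for c in C) <= 3

# Summation by parts:
#   sum_{n=1}^N cos(n)/n = sum_{n=1}^{N-1} C(n) (1/n - 1/(n+1)) + C(N)/N.
direct = sum(math.cos(n) / n for n in range(1, N + 1))
transformed = sum(C[n] * (1 / n - 1 / (n + 1)) for n in range(1, N)) + C[N] / N
assert abs(direct - transformed) < 1e-9
```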

Since the series on the right converges and the two series have the same value, the series on the left also converges. However, this does not imply that the series on the left also converges absolutely. As a trivial counterexample, if a conditionally convergent series sums to $c$ then $c\sum_{n=1}^\infty \href{http://en.wikipedia.org/wiki/Kronecker_delta}{\delta_{n,1}}$ is an absolutely convergent series which sums to the same value. 🙂

In this post, we answer the question of whether $\sum_{n=1}^\infty\cos n/n$ converges absolutely or not.

# The infinite hat problem

The infinite hat problem is a great puzzle. If you have a strong math background, you should try solving it before reading my solution below!

Here’s my strategy for the wizards: first, they agree on an ordering of themselves. Each wizard can be indexed by a natural number, since there are countably many of them. They then consider the set of all possible hat configurations $S$ with respect to that ordering. By the well-ordering theorem (which is equivalent to the axiom of choice) a well-ordering of $S$ exists; the wizards also agree on a specific well-ordering.

Note that this step is non-constructive because it relies on the axiom of choice. That is, such a well-ordering exists but there may not be a way to explicitly construct it. The point of the note about assuming the axiom of choice was a tip-off that the wizards need to make their decision based off of a set whose existence is only ensured by the axiom of choice.

Once the well-ordering has been chosen the wizards are ready to receive their hats. Once they can see everyone else’s hat, each wizard constructs the subset $T$ of $S$ containing the hat configurations which differ from the configuration they observe in only finitely many hats. A wizard’s ignorance of their own hat colour is irrelevant to this construction: the two candidate full configurations differ in just one hat, so they determine the same subset $T$. In particular, every wizard’s subset $T$ consists of the true configuration together with all configurations which differ from it in finitely many hats, and is therefore the same for all wizards.

Now that all the wizards have constructed the same $T\subset S$, they use the well-ordering of $S$ to find the least element of $T$, and everyone guesses the hat colour which they have in the least element. Since every element of $T$ differs from the true configuration in finitely many hats, the configuration that the wizards guess will also differ in finitely many hats. Thus almost all wizards will choose correctly.

I heard about the problem on a list of good logic puzzles compiled by Philip Thomas. I purposely haven’t read his solution yet, since I didn’t want that to influence me while writing down my solution.

# An extended hat puzzle

Shortly after hearing about the hat puzzle I wrote about last month I came across an interesting extension of the problem, which replaces the 100 wizards with an infinite number of wizards:

A countably infinite number of wizards are each given a red or blue hat with 50% probability. Each wizard can see everyone’s hat except their own. The wizards have to guess the colour of their hat without communicating in any way, but will be allowed to devise a strategy to coordinate their guesses beforehand. How can they ensure that only a finite number of them guess incorrectly? You may assume the axiom of choice.

This seems paradoxical since somehow knowing about other wizards’ hats—which are chosen independently from a wizard’s own hat—allows each wizard to conclude that they will almost surely guess their hat colour correctly.

# Volume of a hypersphere in the 1-norm

In a previous post I considered the volume of the $n$-dimensional hypersphere, that is, the set of points $\x\in\mathbb{R}^n$ satisfying

$\lVert\x\rVert \leq R ,$

where $\lVert\x\rVert$ denotes the usual Euclidean norm (also known as the 2-norm),

$\lVert\x\rVert := \sqrt{x_1^2+\dotsb+x_n^2} .$

Today, I’d like to consider the problem of computing the volume of an $n$-dimensional hypersphere in the 1-norm (also known as the Manhattan distance or taxicab norm), which is defined by

$\lVert\x\rVert_1 := \lvert x_1\rvert+\dotsb+\lvert x_n\rvert .$

The volume of the 1-norm hypersphere is given by the expression

$V_n(R) := \frac{(2R)^n}{n!} ,$

as we will show by induction on $n$. In the base case $n=1$ one has

$\newcommand{\d}{\,\mathrm{d}} V_1(R) = \int_{-R}^R\d x_1 = 2R ,$

as required. Now suppose that the formula holds in dimension $n-1$. Then we have

\begin{align*}
V_n(R) &= \int\limits_{\lvert x_1\rvert\leq R}\;\int\limits_{\lvert x_1\rvert+\lvert x_2\rvert\leq R}\dotsi\int\limits_{\lvert x_1\rvert+\dotsb+\lvert x_n\rvert\leq R}\d x_n\dotsm\d x_1 \\
&= \int\limits_{\lvert x_1\rvert\leq R}\;\int\limits_{\lvert x_2\rvert\leq R-\lvert x_1\rvert}\dotsi\int\limits_{\lvert x_2\rvert+\dotsb+\lvert x_n\rvert\leq R-\lvert x_1\rvert}\d x_n\dotsm\d x_1 \\
&= \int\limits_{\lvert x_1\rvert\leq R} V_{n-1}\bigl(R-\lvert x_1\rvert\bigr) \d x_1 \\
&= \int_{-R}^R \frac{2^{n-1}(R-\lvert x_1\rvert)^{n-1}}{(n-1)!} \d x_1 \\
&= 2\int_{0}^R \frac{2^{n-1}(R-x_1)^{n-1}}{(n-1)!} \d x_1 \\
&= \frac{2^n}{(n-1)!}\biggl[-\frac{1}{n}(R-x_1)^n\biggr]_0^R \\
&= \frac{(2R)^n}{n!}
\end{align*}

By induction, the formula holds for all positive integers $n$.
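The formula can also be sanity-checked by Monte Carlo integration; the sketch below samples points uniformly from the enclosing cube $[-R,R]^n$ and counts how many land in the 1-norm ball:

```python
import math
import random

def volume_estimate(n, R, samples):
    """Monte Carlo estimate of the volume of the 1-norm ball of radius R in R^n."""
    random.seed(2024)  # fixed seed so the estimate is reproducible
    hits = 0
    for _ in range(samples):
        point = [random.uniform(-R, R) for _ in range(n)]
        if sum(abs(x) for x in point) <= R:
            hits += 1
    # The sampling cube [-R, R]^n has volume (2R)^n.
    return (2 * R) ** n * hits / samples

n, R = 3, 1.0
exact = (2 * R) ** n / math.factorial(n)  # the formula (2R)^n / n!
estimate = volume_estimate(n, R, 200_000)
assert abs(estimate - exact) / exact < 0.05
```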