Analytic continuation

Many of the functions that one meets in calculus, including the trigonometric (sine, cosine, tangent), exponential, and logarithm functions, can take complex numbers as their arguments (the "input"), and the value of the function (the "output") is then also a complex number. One might think that there are many different ways to "extend" or "continue" these functions to the complex domain, but, under a mild criterion, there is a unique way to do so. Since the time of Lagrange, these and a host of other functions have been known as "analytic", a term that has come to mean something precise (locally, the function can be represented by a power series) and that, in the complex domain, is equivalent to this rather mild criterion. The process and rules of analytically continuing a function (i.e., extending the function's domain so as to remain analytic), not just from the reals to the complex numbers but from one part of the complex plane to another, have become an important object of study in complex analysis. Despite its reputation for being difficult and mysterious, analytic continuation embodies the "essence" of functions of (one or more) complex variables, namely that the identity of a function is completely determined, or "encoded", by its values over an arbitrarily small neighborhood, anywhere, and it deserves to be a central theme of the whole subject.

Extending familiar functions to complex numbers is not just an idle pursuit of the pure mathematician; it often provides new insights into, or even holds the key to, questions that one might ask on the real line. For example,
 * Is there a way to define $$\log$$ (always the natural log) of negative numbers?
 * Why does the Taylor series of $$e^{-x^2}$$ converge for all $$x\in\mathbb R$$, while those of $$\arctan(x)$$ and $$\frac{1}{1+x^2}$$ converge only for $$|x|\leq 1$$ and $$|x|<1$$, respectively, even though all three functions are infinitely differentiable on all of $$\mathbb R$$?
 * Why is it that we typically only need to check trigonometric identities for acute angles, yet they automatically hold for all reals?
 * What do people mean when they say that $$1+2+3+\cdots = -\frac{1}{12}$$?

Prior knowledge of complex analysis is not assumed beyond some acquaintance with the algebra of $$z=x+iy$$. For a quick visual introduction, aimed specifically at the last question, see Visualizing the Riemann zeta function and analytic continuation by 3blue1brown. Due to a lack of illustrative graphics, this article shall, for the most part, stay away from the geometric perspective.

Prelude: Analytic continuation on the real line
It is possible to discuss analytic continuation on the real line, and it may be instructive to do so. The trigonometric functions are originally defined for acute angles, i.e., for $$x\in[0,\pi/2]$$, and it is not as trivial a matter as it may seem to extend them to all reals. Yes, we are told in school to define them by way of the unit circle, which has the appeal of being periodic and "sine-wavy", but how can we be sure that the plethora of identities derived from geometry remain valid for all real numbers? Instead, let's take the identities $$\sin x =2\sin {x\over 2} \cos {x\over 2}$$ and $$\cos x=\cos^2\frac{x}{2}-\sin^2\frac{x}{2}$$ as the definitions of sine and cosine, respectively, for $$x\in(\pi/2, \pi]$$. But does it still hold that $$\sin^2 x+\cos^2 x=1$$ there? That's not hard to check (algebraically), and we can proceed to analytically continue again, to $$x\in(\pi, 2\pi]$$, and so on to infinity. As cumbersome as it is, this process is one of the most useful techniques of analytic continuation, namely continuation via a functional equation. We shall revisit it later.
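This strip-by-strip continuation can be sketched in a few lines of Python. Here `math.sin`/`math.cos` restricted to $$[0,\pi/2]$$ stand in for the "geometrically known" values, and the names `sin_ext`/`cos_ext` are our own:

```python
import math

def sin_ext(x):
    """Sine for x >= 0, using library values only on [0, pi/2] and
    extending by the identity sin x = 2 sin(x/2) cos(x/2)."""
    if 0 <= x <= math.pi / 2:
        return math.sin(x)          # "known" base case: acute angles only
    return 2 * sin_ext(x / 2) * cos_ext(x / 2)

def cos_ext(x):
    """Cosine extended by cos x = cos^2(x/2) - sin^2(x/2)."""
    if 0 <= x <= math.pi / 2:
        return math.cos(x)
    return cos_ext(x / 2) ** 2 - sin_ext(x / 2) ** 2

# The Pythagorean identity survives the continuation:
x = 5.0  # well beyond pi/2
assert abs(sin_ext(x) ** 2 + cos_ext(x) ** 2 - 1) < 1e-12
```

Halving the argument repeatedly always lands in the base interval, which is exactly the "continue one strip at a time" process described above.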

From real to complex
The most basic functions are the polynomials, such as $$f(x)=ax^2+bx+c$$, and to extend them to the complex plane we only need to know how to add and multiply two complex numbers. Moreover, the complex numbers form a field, i.e., we can also divide by any nonzero complex number, so we can easily extend any rational function (the quotient of two polynomials) to the complex plane, minus a finite number of points. The rule seems to be that, as long as a function is expressed with algebraic operations, we have no problem extending it to complex numbers. (That is just the "mechanical" or "algorithmic" aspect of what the function is; to get a more complete picture, we would need the geometry of complex numbers, which incidentally is the key to the Fundamental Theorem of Algebra.)
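In Python, for instance, the built-in complex type already implements the field operations, so a rational function extends verbatim (the function `f` below is our own example, defined everywhere except at the zero of its denominator):

```python
# Any rational function of a complex variable needs only +, -, *, /.
def f(z):
    return (z**2 + 1) / (z - 2)   # defined everywhere except z = 2

print(f(3.0))   # real input: (9 + 1)/(3 - 2) = 10
print(f(1j))    # complex input: (i^2 + 1)/(i - 2) = 0
```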

What about this function?$$f(x)=\begin{cases} x^2 & x\geq 0 \\ 0 & x<0. \end{cases}$$We can extend the $$x^2$$ part and the $$0$$ part separately, and have them meet at the vertical (imaginary) axis. Looking at the point $$z=i$$, the value of $$f$$ is $$0$$ coming from one side and $$i^2=-1$$ from the other, so it's not continuous there. You may try to push the boundary to the left or right, but the extension can never be continuous, let alone differentiable. Even in the vicinity of $$z=0$$, the two sides don't "fit together", even though along the real axis $$f$$ is differentiable. You may try functions $$f(x)$$ that are twice (or more) differentiable, and you will run into the same problem. It's best that we declare that such $$f(x)$$ can't be extended to the complex domain. We might add to our rule of thumb that "piecewise defined" functions, including expressions that use the absolute value, are not permitted.

Next, consider transcendental functions, e.g., $$f(x)=e^x$$. One approach is to approximate it by polynomials, which can be extended to the complex plane. Indeed, given the Taylor series$$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots,$$it is natural to define $$e^z$$ to be the limit of the successive polynomial approximations, i.e., the power series, which does converge for all $$z\in\mathbb C$$. That is a quick and efficient way to extend functions, including sine and cosine, to the whole complex plane.
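As a numerical sanity check, the partial sums of the series do converge to the library's exponential at a complex point (a sketch; `exp_series` is our own name for the truncated series):

```python
import cmath

def exp_series(z, terms=60):
    """Partial sums of 1 + z + z^2/2! + z^3/3! + ..."""
    total, term = 0, 1
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # turn z^n/n! into z^(n+1)/(n+1)!
    return total

z = 2 + 3j
print(exp_series(z))   # matches cmath.exp(z) to near machine precision
```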

We can now evaluate $$\sin(i)$$:$$\sin(i)=i-\frac{i^3}{3!}+\frac{i^5}{5!}-\cdots = i\left(1+\frac{1}{3!}+\frac{1}{5!}+\cdots\right)=i\,\frac{e-e^{-1}}{2}.$$The connection between trigonometric and exponential functions is much celebrated in Euler's formula, $$e^{ix}=\cos x+i\,\sin x$$, which is both a gem in itself and extremely useful in deriving other formulas.

What about $$\log x$$? Its Taylor series at $$x=1$$ is$$\log(x)=(x-1)-\frac{1}{2}(x-1)^2+\frac{1}{3}(x-1)^3-\cdots$$and for complex $$z$$, the series converges only inside the disk $$|z-1|<1$$. It is a general rule that any power series converges in a circular disk and diverges outside it (and may or may not converge at a given point on the circle). The "radius of convergence" from calculus is literally a radius.

Now that we have analytically continued the $$\log$$ to the disk $$|z-1|<1$$, we still can't evaluate $$\log(i)$$, since $$|i-1|=\sqrt 2>1$$. Had we started with the Taylor series at a different point, say $$x=2$$, we would have extended $$\log(z)$$ to a bigger disk, namely $$|z-2|<2$$. Crucially, the two extensions agree on the overlap. Furthermore, nothing stops us from taking the Taylor series at a center off the real axis, and curiously the disk of convergence always has the origin on its boundary circle, i.e., the series converges in as big a disk as possible, for we know $$\log$$ must have a "singularity" at $$0$$. We could then in theory reach $$\log(i)$$ by successive Taylor expansion, even though it is cumbersome to calculate with.
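Cumbersome for hand calculation, but easy to mechanize. The sketch below (our own function names, not from the text) walks expansion centers along the unit circle from $$1$$ to $$i$$, computing $$\log$$ at each new center from the Taylor series $$\log z=\log a+\sum_{k\geq 1}\frac{(-1)^{k+1}}{k}\left(\frac{z-a}{a}\right)^k$$ at the previous one:

```python
import cmath

def log_series(log_a, a, z, terms=200):
    """log(z) from the Taylor series of log centered at a, given the
    already-computed value log(a); requires |z - a| < |a|."""
    w = (z - a) / a
    return log_a + sum((-1) ** (k + 1) * w ** k / k for k in range(1, terms))

# Small steps along the unit circle from 1 to i, re-expanding each time.
centers = [cmath.exp(1j * cmath.pi / 2 * t / 8) for t in range(9)]
val, a = 0.0, centers[0]          # log(1) = 0 to start
for z in centers[1:]:
    val = log_series(val, a, z)   # value at the new center...
    a = z                         # ...becomes the next expansion point
print(val)                        # ~ i*pi/2
```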

There is another way to analytically continue $$\log(z)$$, and that is to start from the formula$$\log x = \int_1^x \frac{dt}{t}$$and integrate from 1 to a complex point $$z$$, i.e., we will need to pick a path in the complex plane. For illustration, we shall carry out the calculation for $$\log(i)$$. Let the path be the quarter circle from 1 to $$i$$, so that the $$t$$ in the integrand is of the form $$t=\cos\theta+i\sin\theta$$ (a change of variable, if you will). Blindly following the calculus of differentials, we have $$dt=-\sin\theta\,d\theta + i\cos\theta\,d\theta = it\,d\theta$$, so that$$\log(i) = \int_1^i \frac{dt}{t} = \int_0^{\pi/2} i\,d\theta = i\, \frac{\pi}{2}.$$That may not be as surprising as it seems if we turn it around: $$e^{i\pi/2}=i$$, an instance of Euler's formula. But what if we had taken a different path, say from 1 straight up to $$1+i$$, then straight to the left to $$i$$? Again trusting calculus,$$\log(i) = \int_1^{1+i}\frac{dt}{t} + \int_{1+i}^i\frac{dt}{t} = \int_0^1 \frac{i\,dy}{1+iy}+\int_1^0\frac{dx}{x+i},$$but these are not obvious to evaluate. However, we could write a computer program to approximate the integrals, which would only involve algebraic operations, and we'd find that the result matches the earlier one, $$i\,\frac{\pi}{2}$$. It is the hallmark of complex analysis that integration in the complex domain is independent of the path taken (with an important caveat; see later), and even defining these integrals properly is part of the standard course in complex analysis (see Cauchy's integral theorem).
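Such a program is short: a midpoint-rule approximation of $$\int dt/t$$ along each straight segment, with complex arithmetic doing the rest (a sketch, not a proper contour-integration library):

```python
import math

def segment_integral(start, end, n=100_000):
    """Midpoint-rule approximation of the integral of 1/t
    along the straight segment from start to end."""
    h = (end - start) / n
    return sum(h / (start + (k + 0.5) * h) for k in range(n))

# The two-leg path 1 -> 1+i -> i from the text:
result = segment_integral(1, 1 + 1j) + segment_integral(1 + 1j, 1j)
print(result)   # ~ i*pi/2, matching the quarter-circle path
```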

Now we can answer the question of the log of negative numbers. The singularity at zero poses an obstacle to analytic continuation, so we have to "get around" it. It turns out the answer$$\log(-x)=\log(x)\pm i\pi \qquad x>0$$depends on which way we go around it (above or below), but not on the specific method (successive Taylor expansion or path integral). That's why, in real-variable calculus, we leave the log of negative numbers undefined. Now we have to make a choice: either pick one of the values (and accept that the function is no longer continuous on the negative real axis), or leave it undefined there, so that the domain of log is a simply-connected region, meaning that any loop inside it can be shrunk to a point. A third option is to keep analytically continuing, and accept that the function shall be multi-valued. More on this later.
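Python's `cmath.log` makes the standard ("principal") choice, approaching the negative axis from above; thanks to signed zeros we can even observe both values of $$\log(-2)$$:

```python
import cmath, math

# Principal branch: log(-x) = log(x) + i*pi  (continuous from above).
assert abs(cmath.log(-2) - (math.log(2) + 1j * math.pi)) < 1e-12

# Approaching from below the cut (imaginary part -0.0) gives the other sign.
assert abs(cmath.log(complex(-2, -0.0)).imag + math.pi) < 1e-12
```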

The general rule seems to be that, if a function can be expressed as an integral, then we can extend it to as large a simply-connected domain as the integrand allows. Other examples include the inverse trigonometric functions:$$\arcsin x=\int_0^x \frac{dt}{\sqrt{1-t^2}}$$and$$\arctan x=\int_0^x\frac{dt}{1+t^2},$$where the "obstacles" are located at $$t=\pm 1$$ and $$t=\pm i$$, respectively.
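On the real line this is easy to verify numerically; the same midpoint-rule idea as before checks the arctangent formula (for complex endpoints one would integrate along a path avoiding $$t=\pm i$$):

```python
import math

def arctan_integral(x, n=100_000):
    """Midpoint-rule approximation of the integral of 1/(1+t^2)
    from 0 to x along the real line."""
    h = x / n
    return sum(h / (1 + ((k + 0.5) * h) ** 2) for k in range(n))

assert abs(arctan_integral(2.0) - math.atan(2.0)) < 1e-8
```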

Of a different flavor is the Gamma function$$\Gamma(s)=\int_0^\infty t^{s-1}e^{-t}dt$$which makes sense (i.e., the integral converges) for $$\operatorname{Re} s>0$$. To extend it to the left, we make use of the functional equation (an equation relating the function with itself)$$\Gamma(s)=(s-1)\,\Gamma(s-1)$$which, together with $$\Gamma(1)=1$$, implies that $$\Gamma(n)=(n-1)!$$ at the positive integers (that's why the $$\Gamma$$-function is an "interpolation" of the factorial). That is, let $$\Gamma(s)=\frac{\Gamma(s+1)}s$$ be the definition for $$-1<\operatorname{Re} s\leq 0$$. Having analytically continued the $$\Gamma$$-function one strip to the left, we can use the same formula for $$-2<\operatorname{Re} s\leq -1$$, and so on.
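The strip-by-strip recursion is one line of code. A sketch for real $$s$$ (with `math.gamma` standing in for the convergent integral on $$s>0$$; `gamma_cont` is our own name):

```python
import math

def gamma_cont(s):
    """Gamma for real s, continued leftward strip by strip."""
    if s > 0:
        return math.gamma(s)          # stands in for the integral
    return gamma_cont(s + 1) / s      # the definition Gamma(s) = Gamma(s+1)/s

# Two strips to the left of the integral's domain of convergence:
assert abs(gamma_cont(-1.5) - math.gamma(-1.5)) < 1e-12
```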

For a functional equation that involves the derivative, consider for any smooth function $$\phi(x)$$ with compact support,$$\Gamma_\phi(s,x) = \int_{-\infty}^x (x-t)^{s-1} \phi(t)\,dt$$which again converges for $$\operatorname{Re} s>0$$, with $$x$$ fixed (and real). It is easy to check that $$\frac{d}{dx}\Gamma_\phi(s,x)=(s-1)\,\Gamma_\phi(s-1,x)$$, so we can analytically continue to the full $$s$$-plane as before, strip by strip. This goes to show that complex functions go well beyond those isolated examples of special functions; here we have one for each $$\phi$$ and each $$x$$.

Let's conclude this long overview with a summary of techniques of analytic continuation:
 * direct extension of algebraic operations
 * (successive) Taylor series expansion
 * integral along a path in the complex domain
 * functional equation

The big question that should be answered is: Why in the universe should all these different analytic continuations result in the same function? We shall first explore this principle of analytic continuation and its implications, then discuss the various obstacles of analytic continuation and how the principle may fail, and finally what the "mild condition" is and the theorem that formalizes the principle — almost the opposite of the logical order.

The Principle of Analytic Continuation
It is a remarkable feature that analytic continuation, if possible at all, is unique: two different methods of analytic continuation have to agree, and we are free to choose whichever method is convenient.

Obstructions to analytic continuation
We have seen that analytic continuation can have an obstruction where the function "blows up", or has a singularity, and that we may "get around it" by entering the complex plane. One can classify the ways such an obstruction behaves.

 * The obstruction is a single point, and analytic continuation from the two sides of it actually agrees. The standard examples are the rational functions, which have a pole wherever the denominator is zero. In general, the quotient of two holomorphic functions also gives rise to poles, e.g., $$\tan(z)=\frac{\sin(z)}{\cos(z)}$$, and such functions are called meromorphic. (There is one other type of singularity, the essential singularity, which for technical reasons is not considered a pole.)
 * The obstruction is a single point, but the analytic continuations from the two sides don't agree. Examples are algebraic functions that involve radicals, such as $$\frac{1}{\sqrt{1-x^2}}$$. If we allow the two analytic continuations to carry on, as if on two separate "sheets" or branches, they may come to agree after meeting a second time. We have then constructed a Riemann surface on which the function is naturally defined. In other cases, such as $$\log(z)$$, they never come to agree, so you get an "infinite-sheeted" Riemann surface.

Analyticity and holomorphic functions
As diverse as the ways of defining functions on the complex domain are, there is a simple criterion that they all satisfy, and that turns out to be all you need: what is known by the term holomorphic, namely that $$f$$ be complex-differentiable throughout an open set (a domain) of the complex plane. It is the highlight of Cauchy's theory that holomorphicity implies analyticity.

Contrast that with the real-variable case, where differentiable functions are very far from being analytic. Even being infinitely differentiable (smooth) does not make a function (real) analytic, as shown by$$f(x)=\begin{cases} e^{-1/x} & x>0 \\ 0 & x\leq 0 \end{cases}$$whose derivatives at $$x=0$$ all vanish, so that its Taylor series at $$0$$ is identically zero and cannot represent $$f$$ on any neighborhood of $$0$$. From the complex point of view we see more clearly why it fails to be analytic: $$e^{-1/z}$$ blows up as $$z\to 0$$ along the negative real axis, so the origin is an essential singularity rather than a point of analyticity.
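The vanishing of all derivatives at $$0$$ comes down to the fact that $$e^{-1/x}$$ decays faster than any power of $$x$$, which a quick numerical check makes vivid:

```python
import math

# The difference quotients for the n-th derivative at 0 involve
# e^{-1/x}/x^n, and the exponential decay crushes every power:
x = 0.01
for n in range(1, 6):
    assert math.exp(-1 / x) / x**n < 1e-30   # e^{-100} ~ 3.7e-44
```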