
Integrating Derivatives
We now turn, as promised before, to the task of finding conditions under which the second part of the FTC holds. That is to say, we want to determine when it makes sense to assert


 * $$\int_{(a,b)}f' = f(b)-f(a)$$

However, we will first need a detour through monotone functions, and then functions of bounded variation. It will not initially be apparent how these topics relate to our goal in this section, so for now I can only promise that, eventually, they will.

That said, it is not as though this alternate line of inquiry "comes out of nowhere". As I will try to describe briefly below, the original motivation was to answer important applied questions about Fourier series.

Fourier and Monotone Functions
It is yet another opportunity to marvel at just how much of modern mathematics is an inheritance of Fourier series.

At some point, the mathematician Dirichlet discovered that a monotone function is equal to its Fourier series at every point where it is continuous. This led mathematicians to become further interested in what else can be accomplished by studying monotone functions.

It was pretty immediately apparent that piecewise monotone functions have similar properties regarding the convergence of their Fourier series.

Roughly speaking, one only has to find the Fourier series on each interval where the function is monotonic. Then one may combine the various Fourier series in a natural way.

Subsequent to all of this, mathematicians realized that monotonicity is sufficient to prove that a function is differentiable almost everywhere. The proof of this fact will be important to us, and therefore we will spend a lesson proving it.

Variation
Decades after Dirichlet, the mathematician Jordan realized that we needn't stop at monotonicity. In fact we could "push" the concept of piecewise monotonicity to a new extreme. This new extreme is the concept of a function of "bounded variation".

Let us approach the concept of bounded variation by trying to invent it ourselves, through a few considerations.

Let us fix the interval [0,1] and imagine just how badly a function may oscillate from increasing to decreasing on this interval. Of course it may do so any finite number of times, and it is not too hard to come up with simple examples of a function switching from increasing to decreasing $$n$$ times, for any $$n$$.

In fact it is not very hard to construct an example of a function switching from increasing to decreasing countably many times. The reader is encouraged to do so now, if she is so inclined.

But the fact that a function may oscillate so pathologically threatens that we may not be able to write it as a sum of an increasing part and a decreasing part -- which would then block our ability to say that it equals its Fourier series.

The reader may be interested, at this moment, to consider the extreme case of the Dirichlet function, $$\mathbf 1_{\Bbb Q}$$. On any non-degenerate interval, this function in a sense "oscillates" infinitely often, between 0 and 1.

What Jordan realized is that we would like to, in some sense, "infinitely partition" a given function, and try to capture a notion of the change in the function, either in the positive or negative direction.

It is possible to do this in a fairly literal sense. One may take any partition of the compact interval [a,b], which we write as $$P=\{x_0=a<x_1<\cdots<x_n=b\}$$. Then define the sum of the positive changes, $$V_f^+(P)=\sum_{k=1}^n(f(x_k)-f(x_{k-1}))^+$$, where $$t^+=\max(t,0)$$ denotes the positive part, and then define the positive variation as the supremum taken over all partitions,


 * $$\mathcal V_f^+([a,b])=\sup_P \{V_f^+(P)\}$$.
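To make the definitions above concrete, here is a minimal numerical sketch. The function names are my own, and the supremum over all partitions is only approximated by a single fine uniform partition, which suffices for tame (e.g. piecewise monotone) functions:

```python
def pos_variation(f, partition):
    """V_f^+(P): the sum of the positive increments of f over a partition."""
    return sum(max(f(b) - f(a), 0.0)
               for a, b in zip(partition, partition[1:]))

def approx_pos_variation(f, a, b, n=10_000):
    """Approximate the positive variation of f on [a, b].

    The true value is a supremum over all partitions; for reasonably
    tame f, a fine uniform partition gets close to it.
    """
    partition = [a + (b - a) * k / n for k in range(n + 1)]
    return pos_variation(f, partition)

# f(x) = x^2 on [-1, 1] increases only on [0, 1], so its positive
# variation there is f(1) - f(0) = 1.
print(approx_pos_variation(lambda x: x * x, -1.0, 1.0))  # ≈ 1.0
```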

If we then use this to define a function of $$x$$ by letting the right endpoint of the interval vary, $$P(x) = \mathcal V_f^+([a,x])$$, then this function $$P$$ essentially "increases in the same way that $$f$$ increases".

One could then go on to define the negative variation of f and use this to try to split f into the positive and negative variation parts.

However, it will simplify our work to not have two different objects, the positive and the negative variation. For most of our work, we can accomplish all of the same goals using just a single object, the total variation.

Show that the Dirichlet function has infinite variation. For concreteness, you may prove that $$\mathcal V_{\mathbf 1_{\Bbb Q}}([0,1]) = \infty$$ and then explain why the same happens on every non-degenerate interval.
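This behavior can be illustrated numerically, although floating-point numbers are all rational, so $$\mathbf 1_{\Bbb Q}$$ cannot be evaluated on floats directly. The sketch below (an encoding of my own devising) represents a point $$x = p + q\sqrt 2$$ by the pair of rationals $$(p,q)$$, so that $$x$$ is rational exactly when $$q=0$$; partitions that alternate rational and irrational points then make the variation grow without bound:

```python
from fractions import Fraction

def dirichlet(point):
    """1_Q at the point x = p + q*sqrt(2): rational exactly when q == 0."""
    _p, q = point
    return 1 if q == 0 else 0

def total_variation(f, partition):
    """V_f(P): sum of |f(x_k) - f(x_{k-1})| over consecutive points."""
    return sum(abs(f(b) - f(a)) for a, b in zip(partition, partition[1:]))

def alternating_partition(n):
    """2n+1 increasing points of [0, 1], alternately rational and irrational."""
    pts = []
    for k in range(2 * n + 1):
        base = Fraction(k, 2 * n)
        # odd-indexed points get a tiny irrational shift, sqrt(2)/10^6,
        # small enough to keep the points in increasing order
        shift = Fraction(0) if k % 2 == 0 else Fraction(1, 10**6)
        pts.append((base, shift))  # encodes the point base + shift*sqrt(2)
    return pts

# Every consecutive pair switches between a rational and an irrational
# point, so each step contributes 1: the variation is exactly 2n.
for n in (1, 10, 100):
    print(n, total_variation(dirichlet, alternating_partition(n)))
```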

Show that for any function, its total variation function is a monotonically increasing function.

Bounded Variation and Up-Down Decomposition
What Jordan found is that, if a function has bounded variation, then it always has an increasing part and a decreasing part. In fact, it turns out that we can say even more than this.

Writing the decomposition as $$f = U - D$$, note that both parts are monotonically increasing; because we subtract $$D$$, it "accounts" for the portion of $$f$$ which is decreasing.

Also notice that, writing $$T(x) = \mathcal V_f([a,x])$$ for the total variation function, so long as $$T$$ is finite at every point, then we have the equality


 * $$f = T-(T-f)$$

and therefore, so long as $$T-f$$ is monotonically increasing, this will be the up-down decomposition that we were seeking.

In this exercise I will guide you through the proof of the following, surprisingly-not-too-hard-to-prove theorem.

Theorem: Let $$f:[a,b]\to\Bbb R$$ be any function. Then f has bounded variation if and only if there exist two monotonically increasing functions, $$U,D:[a,b]\to\Bbb R$$ such that $$f=U-D$$.

1. Assume that f has bounded variation. We will set $$U = T$$, as we have noted above, and $$D = T-f$$. So all that we must do is prove that $$T-f$$ is monotonically increasing. By definition, you need to show that if $$a\le x<y\le b$$ then $$T(x)-f(x)\le T(y)-f(y)$$. It is natural to re-arrange this with similar terms grouped together: $$f(y)-f(x)\le T(y)-T(x)$$. It should make some sense to expect that $$T(y)-T(x) = \mathcal V_f([x,y])$$. But rather than going straight for this result, it may be easier to prove a lemma first: for any $$c\in[a,b]$$ we have $$\mathcal V_f([a,b]) = \mathcal V_f([a,c])+\mathcal V_f([c,b])$$. It should also be a one- or two-line proof to show that $$f(y)-f(x)\le \mathcal V_f([x,y])$$.

2. Now assume f has the up-down decomposition given by U and D, and show that f has bounded variation. Hint: Start by considering any partition P, and the variation $$ V_f(P)$$. Now split this up into positive and negative variation, as described at the start of the subsection Variation. Show that the positive variation is upper-bounded by $$U(b)-U(a)$$ and from here the rest of the solution may be clear enough.
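As a sanity check on the theorem, here is a short numerical sketch (my own illustration, using $$f(x)=\sin(5x)$$, which oscillates on [0,1] but has bounded variation). It approximates the total variation function $$T$$ cumulatively on one shared grid and confirms that both $$U=T$$ and $$D=T-f$$ are nondecreasing:

```python
import math

f = lambda x: math.sin(5 * x)       # oscillates on [0, 1], bounded variation

n = 2000
xs = [k / n for k in range(n + 1)]  # one shared grid for [0, 1]

# Cumulative variation along the grid: T[j] approximates V_f([0, x_j]).
T = [0.0]
for a, b in zip(xs, xs[1:]):
    T.append(T[-1] + abs(f(b) - f(a)))

U = T                                    # U = T
D = [t - f(x) for t, x in zip(T, xs)]    # D = T - f

def is_nondecreasing(vals, tol=1e-12):
    return all(b >= a - tol for a, b in zip(vals, vals[1:]))

# Both pieces of the decomposition f = U - D are nondecreasing, as the
# theorem predicts: each grid step adds |Δf| to T, and |Δf| >= Δf.
print(is_nondecreasing(U), is_nondecreasing(D))  # True True
```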

Dini Derivatives
Finally I want to introduce a concept which will allow for a certain amount of simplification later. First let's establish some notation and make a few observations:

For $$h\neq 0$$, define the difference quotient $$\text{Diff}_hf(x) = \frac{f(x+h)-f(x)}{h}$$. The derivative of f at x is then the limit of the difference quotient as h goes to zero.


 * $$f'(x)=\lim_{h\to 0}\text{Diff}_hf(x)$$

However, there are a number of different ways that this limit may fail to exist.


 * The limit may be infinity.
 * The left-handed limit may exist but the right-handed limit may be infinity.
 * The two one-handed limits may both exist but fail to equal each other.
 * The left-handed limit may fail to exist at all, without being infinite (for instance, by oscillating).

And so on, with many other variations.

Our proofs in the next several lessons will become simpler if we instead focus on the limsup of the difference quotient, rather than the limit. This is because the limsup is guaranteed to exist, if we count "being infinite" as an extended kind of "existence".

We will similarly also consider the liminf, and the one-handed versions of each, separately; this gives four quantities in all, called the Dini derivatives of $$f$$. The derivative then exists precisely when all four of these quantities are equal and finite.
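For reference, here is one common notation for these four quantities (the notation is standard in the literature, though the choice to introduce it here is mine):


 * $$D^+f(x)=\limsup_{h\to 0^+}\text{Diff}_hf(x),\qquad D_+f(x)=\liminf_{h\to 0^+}\text{Diff}_hf(x)$$
 * $$D^-f(x)=\limsup_{h\to 0^-}\text{Diff}_hf(x),\qquad D_-f(x)=\liminf_{h\to 0^-}\text{Diff}_hf(x)$$

Each of these always exists as an extended real number, which is the simplification promised above.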

Let $$f:[a,b]\to\Bbb R$$ be any function defined on a non-degenerate interval, and let $$x\in (a,b]$$. Show that the upper-left derivative of f exists at x as a finite real number, or is infinite.

Infer that the remaining Dini derivatives also exist or are infinite.
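To close, a small numerical illustration (the example function is my own choice, not from the text): for $$f(x)=x\sin(1/x)$$ with $$f(0)=0$$, the difference quotient at 0 is $$\sin(1/h)$$, so the ordinary derivative fails to exist there, yet the upper and lower limits defining the Dini derivatives behave perfectly well:

```python
import math

def f(x):
    return x * math.sin(1.0 / x) if x != 0 else 0.0

def diff_quotient(h):
    """Diff_h f(0) = (f(0 + h) - f(0)) / h, which equals sin(1/h)."""
    return (f(h) - f(0.0)) / h

# Along h_k = 1/(pi/2 + 2*pi*k) -> 0+ the quotient is sin(1/h_k) = 1, and
# along g_k = 1/(3*pi/2 + 2*pi*k) -> 0+ it is -1.  So the upper-right
# Dini derivative at 0 is 1 while the lower-right is -1, and the ordinary
# right-handed limit cannot exist.
tops = [diff_quotient(1.0 / (math.pi / 2 + 2 * math.pi * k)) for k in range(1, 6)]
bots = [diff_quotient(1.0 / (3 * math.pi / 2 + 2 * math.pi * k)) for k in range(1, 6)]

print([round(t, 6) for t in tops])  # [1.0, 1.0, 1.0, 1.0, 1.0]
print([round(b, 6) for b in bots])  # [-1.0, -1.0, -1.0, -1.0, -1.0]
```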