User:Addemf/sandbox/Variation

Variation
While studying Fourier series, in 1829 the mathematician Dirichlet proved essentially the following theorem.

If a function is piecewise monotone and periodic, then it equals its Fourier series.

This result perhaps sparked the initial interest in monotone functions.

In 1881, the mathematician Camille Jordan understood that this condition was sufficient, but wondered whether it was also necessary. That is to say, if a function equals its Fourier series, must it be piecewise monotone?

In investigating this question, it is natural to take any given function and consider "the parts where it increases" and "the parts where it decreases". However, we need a good way to capture this notion of "increasing and decreasing parts" rigorously.

Exercise 1. Derivatives Are a Distraction
I think a lot of readers will instinctively feel that the derivative is a good way to capture this idea. That is to say, we may take a function $f$ and decompose its domain into the sets $$D=\{x:f'(x)<0\}$$ and $$I=\{x:f'(x)\ge 0\}$$.

Then we could construct the functions $$f^-=f\mathbf 1_D$$ and $$f^+=f\mathbf 1_I$$.

Then we would have the decomposition of the function $$f=f^-+f^+$$.
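For concreteness, here is what this proposed decomposition looks like for one simple choice of $f$ (a worked illustration only; it is not the answer to the exercise below):

```latex
% Worked instance of the decomposition for f(x) = x^2 on \mathbb{R}.
% Here f'(x) = 2x, so
%   D = \{x : f'(x) < 0\} = (-\infty, 0),
%   I = \{x : f'(x) \ge 0\} = [0, \infty).
\[
  f^-(x) = x^2 \,\mathbf{1}_{(-\infty,0)}(x), \qquad
  f^+(x) = x^2 \,\mathbf{1}_{[0,\infty)}(x),
\]
% and since D and I cover the whole domain here,
\[
  f(x) = f^-(x) + f^+(x) \quad \text{for every } x \in \mathbb{R}.
\]
```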

Explain why this is not the best way to analyze the concept of an increasing and decreasing part for functions.

Variation
A better way to define the increasing and decreasing parts of a function, one which does not require the derivative, is to partition its domain into finitely many pieces. On each piece we can measure how much the function rises or falls, giving an approximation of its increase and decrease there. Predictably, we then let the number of points of the partition go to infinity, in some sense.
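To anticipate where this leads, the idea can be sketched in the standard notation for what Jordan introduced as the variation (the symbols $P$, $x_i$, and $[a,b]$ are not yet defined in the text above; take $f$ to be defined on an interval $[a,b]$):

```latex
% For a partition P : a = x_0 < x_1 < \dots < x_n = b of [a, b],
% the following sum measures the total rise-and-fall of f across the pieces:
\[
  V(f, P) \;=\; \sum_{i=1}^{n} \bigl| f(x_i) - f(x_{i-1}) \bigr|.
\]
% "Letting the number of points go to infinity" is made precise by
% taking the supremum over all partitions P of [a, b]:
\[
  V_a^b(f) \;=\; \sup_{P} \, V(f, P).
\]
```

When this supremum is finite, $f$ is said to be of bounded variation on $[a,b]$, which is the notion Jordan arrived at in 1881.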