This last semester, I ran a fun, informal master class in problem solving. Actually, a graduate student of mine — Yunfeng Hu — who is an expert problem solver, produced all the problems from the immense library he built up over his undergraduate career in China.

I believe that the art and culture of problem solving is not as widely valued in the USA as it ought to be. Of course there are those who do pursue this obsession, and we end up with people with high scores on the Olympiad and Putnam competitions. But many (most?) do not develop this skill to any great degree. While one can certainly argue that too much emphasis on problem solving along the lines of these well known competitions does not help very much in making real progress in current research, I would argue that many have fallen off the other side of the horse: they are hampered by their lack of experience in solving these simpler problems.

Here is a problem that arose in our Wednesday night session:

Suppose that

(1) $latex |f'(x)| \leq C\,|f(x)|$

for all $latex x \in \mathbb{R}$ and some fixed $latex C < \infty$, that $latex f: \mathbb{R} \rightarrow \mathbb{R}$ is continuous and differentiable, and that $latex f(0) = 0$.

Prove that $latex f(x) = 0$ everywhere.

Perhaps you want to fiddle with this problem before looking at some solutions. If so, wait to read further.

Here are three solutions. In these first three solutions we are dealing with $latex f: \mathbb{R} \rightarrow \mathbb{R}$, and so we will denote the derivative $latex Df$ by $latex f'$.

(Solution 1) Consider all the solutions to $latex y' = Cy$ and $latex y' = -Cy$. These are all curves in $latex \mathbb{R}^2$ of the form $latex y = Ae^{Cx}$ and $latex y = Be^{-Cx}$. We note that if $latex f$ is any function that satisfies equation (1), then everywhere its graph intersects the graph of a curve of the form $latex y = Ae^{Cx}$ for some $latex A > 0$, the graph of $latex f$ must cross the graph of $latex Ae^{Cx}$: if we are moving from left to right, the graph of $latex f$ moves from above to below the graph of $latex Ae^{Cx}$. Likewise, $latex f$ crosses any $latex y = Be^{-Cx}$ with $latex B > 0$ from below to above, when moving from left to right in $latex x$. Now, suppose that $latex f(x_0) > 0$ at some $latex x_0$. Then simply choose the curves $latex y = Ae^{Cx}$ and $latex y = Be^{-Cx}$, with $latex 0 < Ae^{Cx_0} < f(x_0)$ and $latex 0 < Be^{-Cx_0} < f(x_0)$, as fences that cannot be crossed by $latex f$ (one for $latex x < x_0$ and the other for $latex x > x_0$) to conclude that $latex f$ can never equal zero, contradicting $latex f(0) = 0$. (Exercise: Verify that this last statement is correct. Also note that assuming $latex f(x_0) > 0$ is enough since, if instead $latex f(x_0) < 0$, then $latex -f$ also satisfies (1) and is positive at $latex x_0$.)
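A quick numerical sanity check of the fence behavior (a sketch; the sample function $latex f(x) = e^{\sin x}$ is my choice, and satisfies $latex |f'| = |\cos x|\,f \leq f$, so $latex C = 1$ works): for a positive $latex f$ with $latex |f'| \leq Cf$, the function $latex f(x)e^{-Cx}$ must be nonincreasing and $latex f(x)e^{Cx}$ nondecreasing, which is exactly the statement that $latex f$ crosses growing exponentials only from above to below and decaying ones only from below to above.

```python
import math

# Sample function (my choice): f(x) = exp(sin(x)) satisfies
# |f'(x)| = |cos(x)| * f(x) <= 1 * f(x), so C = 1, and f > 0 everywhere.
C = 1.0
f = lambda x: math.exp(math.sin(x))

xs = [i * 0.01 for i in range(-500, 501)]      # grid on [-5, 5]
down = [f(x) * math.exp(-C * x) for x in xs]   # should be nonincreasing
up   = [f(x) * math.exp( C * x) for x in xs]   # should be nondecreasing

assert all(a >= b - 1e-12 for a, b in zip(down, down[1:]))
assert all(a <= b + 1e-12 for a, b in zip(up, up[1:]))
print("fence monotonicity verified on [-5, 5]")
```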

(Solution 2) This next solution is a sort of barehanded version of the first solution. We note that equation (1) is equivalent to

(2) $latex -C\,|f(x)| \leq f'(x) \leq C\,|f(x)|$

and if we assume that $latex f > 0$ on an interval $latex (a,b)$, then this of course turns into

(3) $latex -C\,f(x) \leq f'(x) \leq C\,f(x).$

Assume that $latex x_0, x \in (a,b)$ with $latex x > x_0$, and divide by $latex f(x)$ to get $latex -C \leq \frac{f'(x)}{f(x)} \leq C$. Integrating this from $latex x_0$ to $latex x$, we have $latex -C(x - x_0) \leq \ln\left(\frac{f(x)}{f(x_0)}\right) \leq C(x - x_0)$, or

(4) $latex f(x_0)\,e^{-C(x - x_0)} \leq f(x) \leq f(x_0)\,e^{C(x - x_0)}.$

Now assume that $latex f(x_0) > 0$. Define $latex z^+ \equiv \inf\{x > x_0 \mid f(x) = 0\}$ and $latex z^- \equiv \sup\{x < x_0 \mid f(x) = 0\}$. Note that $latex z^+ < \infty$ implies that $latex f(z^+) = 0$ and $latex z^- > -\infty$ implies that $latex f(z^-) = 0$ (by continuity), while $latex f > 0$ on $latex (z^-, z^+)$, so (4) applies there. Use equation (4), together with $latex x \uparrow z^+$ or $latex x \downarrow z^-$, to get a contradiction if either $latex z^+ < \infty$ or $latex z^- > -\infty$.
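Here is a discrete analogue of the sandwich (4), a sketch under a modeling assumption of my own: replace $latex |f'| \leq C|f|$ by the difference constraint $latex |u_{k+1} - u_k| \leq Ch\,u_k$ for a positive sequence with step size $latex h$. The sequence then stays between the discrete fences $latex u_0(1 - Ch)^k$ and $latex u_0(1 + Ch)^k$, the counterparts of the two exponentials.

```python
import random

# Model |f'| <= C|f| discretely (my assumption): a positive sequence u
# with |u[k+1] - u[k]| <= C*h*u[k], generated with random admissible steps.
random.seed(0)
C, h, n = 1.0, 0.01, 1000
u = [1.0]
for _ in range(n):
    u.append(u[-1] * (1.0 + C * h * random.uniform(-1.0, 1.0)))

# Discrete fences, the counterparts of u[0] e^{+-C h k}:
for k, uk in enumerate(u):
    assert u[0] * (1.0 - C * h) ** k <= uk <= u[0] * (1.0 + C * h) ** k

print("discrete sandwich held for all", n, "steps")
```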

(Solution 3) In this approach, we use the mean value theorem to get what we want. Suppose that $latex f(0) = 0$. We will prove that $latex f = 0$ on the interval $latex I \equiv \left[-\frac{1}{2C}, \frac{1}{2C}\right]$.

(Exercise) Prove that this shows that $latex f = 0$ on all of $latex \mathbb{R}$. (Of course, all this assumes Equation (1) is true.)

Assume that $latex x \in I$. Then the mean value theorem says that

(5) $latex f(x) = f(x) - f(0) = f'(\xi_1)(x - 0) = f'(\xi_1)\,x$

for some $latex \xi_1$ between $latex 0$ and $latex x$. But using equation (1) and the fact that $latex |x| \leq \frac{1}{2C}$, this turns into $latex |f(x)| \leq C\,|f(\xi_1)|\,|x| \leq \frac{1}{2}|f(\xi_1)|$. By the same reasoning, we get that $latex |f(\xi_1)| \leq \frac{1}{2}|f(\xi_2)|$ for some $latex \xi_2$ between $latex 0$ and $latex \xi_1$, and we can conclude that $latex |f(x)| \leq \frac{1}{4}|f(\xi_2)|$. Repeating this argument, we have

(6) $latex |f(x)| \leq \left(\frac{1}{2}\right)^n |f(\xi_n)|$

for some $latex \xi_n$ between $latex 0$ and $latex x$ (so $latex \xi_n \in I$), for any positive integer $latex n$. Because $latex f$ is continuous, we know that there is an $latex M < \infty$ such that $latex |f| \leq M$ on $latex I$. Using this fact together with Equation (6), we get

(7) $latex |f(x)| \leq \frac{M}{2^n} \text{ for every positive integer } n,$

which of course implies that $latex f(x) = 0$.
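The mean value theorem step above can be watched in action numerically. This sketch uses $latex f(x) = \sin x$ on $latex [0,1]$ (my choice, made only to illustrate the existence of the mean value point, not to satisfy Equation (1)) and locates $latex \xi$ with $latex f(1) - f(0) = f'(\xi)(1 - 0)$ by bisection:

```python
import math

# f(x) = sin(x) on [0, 1]; the MVT guarantees xi in (0, 1) with
# cos(xi) = (sin(1) - sin(0)) / (1 - 0) = sin(1).  Since cos is
# decreasing on [0, 1], we can find xi by bisection.
target = math.sin(1.0)
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if math.cos(mid) > target:
        lo = mid
    else:
        hi = mid
xi = 0.5 * (lo + hi)

assert 0.0 < xi < 1.0 and abs(math.cos(xi) - target) < 1e-12
print("mean value point xi =", round(xi, 6))
```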

Now we could stop there, with three different solutions to the problem, but there is more we can find from where we are now.

Notice that one way of looking at the result we have shown is that if

(1) $latex f: \mathbb{R} \rightarrow \mathbb{R}$ is differentiable,

(2) $latex f(x_0) = 0$ for some $latex x_0 \in \mathbb{R}$, and

(3) for some $latex C < \infty$, we have that $latex |f'(x)| \leq C\,|f(x)|$ when $latex x \in \mathbb{R}$,

then

(8) $latex f(x) = 0 \text{ for all } x \in \mathbb{R}.$

Note also that if we define

(9) $latex G(f) \equiv \sup_{x} \frac{|f'(x)|}{|f(x)|},$

we find that

(10) $latex G(\lambda f) = G(f) \text{ for all } \lambda \neq 0.$

Let $latex C^1$ denote the continuously differentiable functions from $latex \mathbb{R}$ to $latex \mathbb{R}$. If we define $latex \mathcal{F}_C \equiv \{f \in C^1 \mid |f'(x)| \leq C|f(x)| \text{ for all } x\}$, we find that not only is $latex \cup_{C < \infty}\,\mathcal{F}_C$ not all of $latex C^1$, we also have functions $latex f$ satisfying $latex f(x) \neq 0$ for all $latex x$ whose $latex \sup_x |f'(x)|/|f(x)| = \infty$ (for example, $latex f(x) = e^{-x^2}$). So we will restrict the class of functions a bit more. The space of continuously differentiable functions from $latex K$ to $latex \mathbb{R}$, $latex C^1(K)$, where $latex K = [a,b]$ (compact!), is closer to what we want. Now, $latex C^1(K) \setminus \cup_{C < \infty}\,\mathcal{F}_C(K)$ (where $latex \mathcal{F}_C(K)$ is defined as above, with $latex K$ in place of $latex \mathbb{R}$) contains only those functions which have a root in $latex K$.
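To see numerically why roots are exactly the obstruction on a compact interval, here is a sketch (the family $latex f_a(x) = x^2 + a$ is my choice): the ratio $latex \sup_{[-1,1]} |f_a'|/|f_a|$ is finite for each $latex a > 0$ but blows up as the root appears at $latex a = 0$.

```python
# The family f_a(x) = x**2 + a (my choice) on the compact interval [-1, 1]:
# for a > 0 there is no root and sup |f_a'| / |f_a| is finite; as a -> 0
# a root appears at x = 0 and the sup blows up (it equals 1/sqrt(a)).
def sup_ratio(a, n=20001):
    xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
    return max(2.0 * abs(x) / (x * x + a) for x in xs)

ratios = [sup_ratio(a) for a in (1.0, 0.1, 0.01, 0.001)]
assert all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:]))  # blowing up
print([round(r, 1) for r in ratios])
```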

We will call the functions in $latex \mathcal{F}_C$ functions with maximal growth rate $latex C$. This is a natural moduli space for functions when we are studying stuff whose (maximal) growth rate depends linearly on the current amount of stuff. Of course, populations of living things fall in the class of things for which this is true. From the proofs above, we know that if $latex f \in \mathcal{F}_C$, then its graph lives in the cone defined by exponentials. More precisely:

If $latex f \in \mathcal{F}_C$, then for $latex x > x_0$ we have $latex |f(x_0)|\,e^{-C(x - x_0)} \leq |f(x)| \leq |f(x_0)|\,e^{C(x - x_0)}$, and for $latex x < x_0$ we have $latex |f(x_0)|\,e^{C(x - x_0)} \leq |f(x)| \leq |f(x_0)|\,e^{-C(x - x_0)}$.

(Exercise) Prove this. Hint: use the first proof, where instead of fences through points strictly below $latex |f(x_0)|$ you use the fences $latex (|f(x_0)| \pm \epsilon)\,e^{\pm C(x - x_0)}$ and let $latex \epsilon \rightarrow 0$.
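A numerical check of the cone containment for a concrete member of $latex \mathcal{F}_1$ (the example $latex f(x) = 1/(1+x^2)$ is mine; it satisfies $latex |f'|/|f| = 2|x|/(1+x^2) \leq 1$, so $latex C = 1$):

```python
import math

# f(x) = 1/(1+x^2) (my example) has |f'|/|f| = 2|x|/(1+x^2) <= 1, so C = 1.
C = 1.0
f = lambda x: 1.0 / (1.0 + x * x)
x0 = 0.5

for i in range(-400, 401):                     # grid on [-4, 4]
    x = i * 0.01
    lo = abs(f(x0)) * math.exp(-C * abs(x - x0))
    hi = abs(f(x0)) * math.exp( C * abs(x - x0))
    assert lo - 1e-12 <= abs(f(x)) <= hi + 1e-12   # the exponential cone
print("graph stays inside the exponential cone around x0 =", x0)
```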

(Remark) Notice that Equation (10) implies that scaling a function in $latex \mathcal{F}_C$ by any non-zero scalar yields another function in $latex \mathcal{F}_C$. As a result, we might choose to consider only

$latex \{f \in \mathcal{F}_C \mid \sup_x |f(x)| = 1\}$

or

$latex \{f \in \mathcal{F}_C \mid f(x_0) = 1\}.$

In both cases we end up with subsets that generate $latex \mathcal{F}_C$ when we take all multiples of those functions by nonzero real numbers.

(Exercise) If we move to higher dimensional domains, how wild can the compact set $latex K$ be and still get these results? It must clearly be connected, so in $latex \mathbb{R}$ we are already completely general with our choice of $latex K = [a,b]$ above.

Moving back to Equation (1), we can look for generalizations: for example, will this result hold when $latex f: \mathbb{R}^n \rightarrow \mathbb{R}^m$? How about when $latex f$ maps from one Banach space to another? How about the case in which $latex f$ is merely Lipschitz?

Let's begin with $latex f: \mathbb{R}^n \rightarrow \mathbb{R}^m$.

In this case, the appropriate version of Equation (1) is

(11) $latex \|Df(x)\| \leq C\,|f(x)|$

where $latex \|Df(x)\|$ denotes the operator norm of the derivative $latex Df(x)$ and $latex |f(x)|$ is the Euclidean norm of $latex f(x)$ in $latex \mathbb{R}^m$.

Notice that

(12) $latex D|f|(x) = \frac{f(x)^T}{|f(x)|}\,Df(x)$

where $latex \frac{f(x)^T}{|f(x)|}$ is an $latex m$-dimensional row vector and $latex Df(x)$ is an $latex m \times n$-dimensional matrix. (Thus the gradient vector $latex \nabla|f|(x)$ is the transpose of the resulting $latex n$-dimensional row vector.) Now we can use this to get the result.
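Equation (12) can be checked by finite differences. This sketch uses an arbitrary sample map $latex f(x,y) = (x^2 + y,\; xy + 1)$ of my own choosing:

```python
import math

# Sample map (my choice): f(x, y) = (x^2 + y, x*y + 1).
def f(x, y):
    return (x * x + y, x * y + 1.0)

def norm_f(x, y):
    a, b = f(x, y)
    return math.hypot(a, b)

x, y, h = 0.7, -0.3, 1e-6
a, b = f(x, y)
n = math.hypot(a, b)
Df = [[2 * x, 1.0],                            # Jacobian of f at (x, y)
      [y, x]]

# (f^T / |f|) Df : a row vector with one entry per domain variable
analytic = [(a * Df[0][j] + b * Df[1][j]) / n for j in range(2)]

# central finite differences of |f|
numeric = [(norm_f(x + h, y) - norm_f(x - h, y)) / (2 * h),
           (norm_f(x, y + h) - norm_f(x, y - h)) / (2 * h)]

assert all(abs(p - q) < 1e-6 for p, q in zip(analytic, numeric))
print("D|f| = (f^T/|f|) Df checked at", (x, y))
```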

Let $latex \gamma(t)$ be the arclength parameterized line segment that starts at $latex x_0$ and ends at the point $latex x$. The above equation tells us that

(13) $latex \left|\frac{d}{dt}|f(\gamma(t))|\right| = \left|\frac{f(\gamma(t))^T}{|f(\gamma(t))|}\,Df(\gamma(t))\,\gamma'(t)\right| \leq \|Df(\gamma(t))\|$

Thus, using Equation (11), we can conclude that

$latex \left|\frac{d}{dt}|f(\gamma(t))|\right| \leq C\,|f(\gamma(t))|,$

which implies that

$latex -C\,|f(\gamma(t))| \leq \frac{d}{dt}|f(\gamma(t))| \leq C\,|f(\gamma(t))|,$

and we can proceed as we did in the second proof of the problem in the case that $latex f: \mathbb{R} \rightarrow \mathbb{R}$. We end up with the following result:

If $latex f \in C^1(\mathbb{R}^n; \mathbb{R}^m)$ and $latex \|Df(x)\| \leq C\,|f(x)|$ for all $latex x$, then

$latex |f(x_0)|\,e^{-C|x - x_0|} \leq |f(x)| \leq |f(x_0)|\,e^{C|x - x_0|}$

for all $latex x, x_0 \in \mathbb{R}^n$.

(Exercise) Show that this result implies that if $latex f(x) = 0$ anywhere, then $latex f$ equals $latex 0$ everywhere.

(Exercise) Show that this implies the one dimensional result (the first theorem we proved above).

(Exercise) Our proof of the result for the case $latex f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ can be carried over to the case of $latex f: B_1 \rightarrow B_2$ where $latex B_1$ and $latex B_2$ are Banach spaces — carry out those steps!

We come now to the question of what we can say when we are less restrictive with the constraints on differentiability. We consider the case in which $latex f: \mathbb{R}^n \rightarrow \mathbb{R}^m$ is Lipschitz. The complication here is that while we know, by Rademacher's theorem, that $latex f$ is differentiable almost everywhere, it might not be differentiable anywhere on the line segment from $latex x_0$ to $latex x$.

Consider a cylinder $latex \mathcal{C}(r)$, with radius $latex r$ and axis equal to the segment from $latex x_0$ to $latex x$. Let $latex E \equiv \{z \in \mathcal{C}(r) \mid Df(z) \text{ does not exist}\}$. Since $latex f$ is differentiable almost everywhere, we have that $latex \mathcal{L}^n(E) = 0$. Therefore (by Fubini's theorem) almost every segment generated by the intersection of a line parallel to the cylinder axis and the cylinder intersects $latex E$ in a set of length $latex 0$. We can therefore choose a sequence of such segments $latex [x_0^i, x^i]$ converging to $latex [x_0, x]$.

Since $latex \frac{d}{dt}|f(\gamma_i(t))|$ exists almost everywhere on the segments (where $latex \gamma_i$ is the arclength parameterization of $latex [x_0^i, x^i]$) and $latex f$ is continuous everywhere, we can integrate the derivatives, exactly as before, to get:

$latex |f(x_0^i)|\,e^{-C|x^i - x_0^i|} \leq |f(x^i)| \leq |f(x_0^i)|\,e^{C|x^i - x_0^i|}.$

And because $latex f$ is continuous, we can send $latex i \rightarrow \infty$ to get that

$latex |f(x_0)|\,e^{-C|x - x_0|} \leq |f(x)| \leq |f(x_0)|\,e^{C|x - x_0|},$

so that we end up with the same result that we had for differentiable functions.
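A small check that the cone bound really does survive a point of non-differentiability (the Lipschitz example $latex f(x) = e^{-|x|}$ is mine; away from the kink $latex |f'| = f$, so $latex C = 1$):

```python
import math

# Lipschitz but not differentiable at 0 (my example): f(x) = exp(-|x|).
# Away from 0, |f'(x)| = f(x), so C = 1 works.
C = 1.0
f = lambda x: math.exp(-abs(x))
x0 = -1.0

for i in range(-300, 301):                     # grid on [-3, 3], crossing 0
    x = i * 0.01
    lo = f(x0) * math.exp(-C * abs(x - x0))
    hi = f(x0) * math.exp( C * abs(x - x0))
    assert lo - 1e-12 <= f(x) <= hi + 1e-12    # cone bound holds
print("cone bound holds across the kink at 0")
```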

There are other directions to take this.

From the perspective of geometric objects, the ratio $latex \frac{\|Df\|}{|f|}$ is a bit funky — for example, if $latex f(x)$ is the volume of a set $latex E(x) \subset \mathbb{R}^n$, where $latex x$ can be thought of as the center of the set, we have that $latex Df$ will be given by the integral of a vectorfield times the outward unit normal restricted to the boundary $latex \partial E(x)$. Thus, $latex Df$ will be an $latex (n-1)$-dimensional quantity and $latex f$ an $latex n$-dimensional quantity. We would usually expect there to be exponents, as in the case of the Poincaré inequality, making the ratio non-dimensional.

On the other hand, one can see this ratio as a sort of measure of the reciprocal length of the objects we are dealing with. From this perspective, the result seems to say that no matter what you do, you cannot get to objects with no volume from objects with non-zero volume without the objects getting small (i.e. without the reciprocal length diverging). This is not profound. On the other hand, that ratio is precisely what is important for certain physical/biological processes. So this quantity being bounded has consequences in those contexts.

This does not lead to a new theorem: as long as the set evolution is smooth, the $latex f$ and $latex Df$ we get this way are just a special case of the cases treated above, and even though actually computing everything from the geometric perspective can be interesting, the result stays the same.

In order to move into truly new territory, we need to consider alternative definitions, other measures of change, other types of spaces. An example might be the following:

Suppose that $latex (X, d)$ is a metric space and $latex f: X \rightarrow \mathbb{R}$. Suppose that $latex f$ is continuous and that $latex \gamma: \mathbb{R} \rightarrow X$ is a geodesic in the sense that for any three points $latex \gamma(s_1), \gamma(s_2), \gamma(s_3)$ in $latex X$, with $latex s_1 < s_2 < s_3$, we have that

$latex d(\gamma(s_1), \gamma(s_3)) = d(\gamma(s_1), \gamma(s_2)) + d(\gamma(s_2), \gamma(s_3)).$

If:

(1) for any two points in the metric space there is a geodesic $latex \gamma$ containing both points, and

(2) for all such $latex \gamma$, $latex f \circ \gamma$ is differentiable,

(3) and $latex |(f \circ \gamma)'(t)| \leq C\,|f(\gamma(t))|$ for all $latex t$,

then, we have that

(14) $latex |f(x)|\,e^{-C\,d(x,y)} \leq |f(y)| \leq |f(x)|\,e^{C\,d(x,y)} \text{ for all } x, y \in X.$

And, again we get the same type of result for this case as we got in the Euclidean cases above.

(Exercise) Prove Equation (14).

(Remark) We start with any metric space $latex (X, d)$ and consider curves $latex \gamma: [a,b] \rightarrow X$ for which

$latex \text{length}(\gamma) \equiv \sup\left\{\sum_{i=0}^{N-1} d(\gamma(t_i), \gamma(t_{i+1})) \;\Bigm|\; a = t_0 < t_1 < \cdots < t_N = b\right\} < \infty.$

We call such curves rectifiable. We can always reparameterize such curves by arclength, so that $latex \gamma: [0, L] \rightarrow X$ and $latex \text{length}(\gamma|_{[0,t]}) = t$ for all $latex t \in [0, L]$. We will assume that all curves have been reparameterized by arclength. Now define a new metric

$latex \hat{d}(x, y) \equiv \inf\{\text{length}(\gamma) \mid \gamma \text{ a rectifiable curve from } x \text{ to } y\}.$

You can check that this will not change the length of any curve. Define an *upper gradient* of $latex f$ to be any non-negative function $latex g$ such that

$latex |f(\gamma(L)) - f(\gamma(0))| \leq \int_0^L g(\gamma(t))\,dt$

for every rectifiable curve $latex \gamma: [0, L] \rightarrow X$.

Now, if $latex C|f|$ is an upper gradient of $latex f$, we again get the same sort of bounds that we got in equation (14) if we replace $latex d$ with $latex \hat{d}$. To read more about upper gradients, see Juha Heinonen's book *Lectures on Analysis on Metric Spaces*.
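As a sanity check on the upper gradient inequality, this sketch verifies numerically that $latex g = C|f|$ works as an upper gradient for the sample $latex f(x) = e^{\sin x}$ (my choice, with $latex C = 1$) along an arclength-parameterized segment of the line, approximating the curve integral with a midpoint rule:

```python
import math

# f(x) = exp(sin(x)) with C = 1 (my sample); check the upper gradient
# inequality |f(gamma(L)) - f(gamma(0))| <= integral of C|f| along gamma
# for gamma(t) = t on [0, 2] (an arclength-parameterized segment).
C = 1.0
f = lambda x: math.exp(math.sin(x))
gamma = lambda t: t
L, n = 2.0, 2000
h = L / n

# midpoint rule for the curve integral of g = C|f|
integral = sum(C * f(gamma((k + 0.5) * h)) for k in range(n)) * h

assert abs(f(gamma(L)) - f(gamma(0))) <= integral
print("upper gradient inequality verified on [0, 2]")
```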

While there are other directions we could push, what we have looked at so far demonstrates that productive exploration can start from almost anywhere. Though we encounter no big surprises in this exploration, the exercise illuminates exactly why the result is what it is, and this solidifies that understanding in our minds.

Generalization is not an empty exercise — it allows us to probe the exact meaning of a result. And that insight facilitates a more robust, more useful grasp of the result. While some get lost in their explorations and would benefit from touching down to the earth more often, it seems to me that in this day and age of *no time to think*, we most often suffer from the opposite problem of never taking the time to explore and observe and see where something can take us.

Pingback: Fun with simple analysis problems I: A Singular Integral Measure for $latex C^{1,1}$ and $latex C^1$ Boundaries | Notes, Articles and Books by Kevin R. Vixie