From: ullrich@math.okstate.edu
Subject: Re: A simple analysis question
Date: Wed, 17 May 2000 17:33:52 GMT
Newsgroups: sci.math

In article , "David Petry" wrote:
>
> ullrich@math.okstate.edu wrote in message <8felkr$93b$1@nnrp1.deja.com>...
> >In article ,
> > "David Petry" wrote:
> >>
> >> Given a sequence a_0, a_1, a_2 ... of complex numbers
> >> such that sum | a_k | diverges, is it necessarily true that
> >> the function f(x) = sum a_k*x^k is unbounded on the
> >> unit disk?
> >
> > There are also
> >counterexamples continuous on the closed disk, and in fact
> >examples where the given series converges _uniformly_ on
> >the closed disk, even though sum |a_k| diverges.
>
> I'd like to see explicitly such a counterexample.

When you said this I immediately posted a reference to an example in Zygmund. That wasn't the example I think of as _the_ example that makes sense; it was just the quickest way to give an accurate answer. There's an example that seems much "nicer" to me, but presenting the proof that that example does the job would involve getting into Lipschitz this and Tauberian theorem that... until the other day, when I realized that there's a totally self-contained, elementary and even fairly short proof that the example I like has the required properties.
Short enough for a sci.math post, especially in a season where the place is dominated by people who like to brag about having made the longest mathematical post ever:

It seems clear that finding trigonometric polynomials P such that |P| is everywhere much smaller than the sum of the absolute values of the coefficients has at least something to do with the problem (whether it's actually relevant or just seems analogous depends on what you know about analysis). There's a clever construction due to Rudin and Shapiro of some polynomials that do exactly this (they're often called the Rudin-Shapiro polynomials): We start with

  P_0(t) = 1
  Q_0(t) = 1.

Then we define

  P_(n+1)(t) = P_n(t) + e^(2^n*i*t) * Q_n(t)
  Q_(n+1)(t) = P_n(t) - e^(2^n*i*t) * Q_n(t).

So P_1(t) = 1 + e^(it), Q_1(t) = 1 - e^(it), P_2(t) = 1 + e^(it) + e^(2it) - e^(3it), Q_2(t) = 1 + e^(it) - e^(2it) + e^(3it), etc.: P_n and Q_n are trigonometric polynomials of degree 2^n - 1, and all the coefficients are plus or minus 1. In particular the sum of the absolute values of the coefficients is 2^n. But it turns out that |P_n| and |Q_n| are everywhere much smaller than that. A person easily verifies that

  |P_(n+1)|^2 + |Q_(n+1)|^2 = 2 * (|P_n|^2 + |Q_n|^2)

(this is just the parallelogram law |A + B|^2 + |A - B|^2 = 2*(|A|^2 + |B|^2), together with |e^(2^n*i*t)| = 1), and hence that |P_n|^2 + |Q_n|^2 = 2^(n+1), so that |P_n|^2 <= 2^(n+1), or |P_n| <= c * 2^(n/2). (Here and below "c" is the traditional "some constant, the value of which may vary from line to line".)

The example we want is the function

  f(t) = sum( e^(i*2^n*t) * 2^(-n) * P_n(t) ).

Note first that the exponentials in the definition of f serve to shift the coefficients, so that the polynomials we're adding, or rather the sequences of coefficients of those polynomials, have disjoint support: the n-th summand occupies exactly the coefficients with 2^n <= k < 2^(n+1). If we write f(t) = sum(a_k * e^(ikt)) then

  a_k = plus or minus 2^(-n)   when 2^n <= k < 2^(n+1).

So each block contributes 2^n * 2^(-n) = 1 to sum |a_k|, hence the sum of |a_k| is infinite, and all we need to do is show that the series sum(a_k * e^(ikt)) converges uniformly.
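For whatever it's worth, the recursion is trivial to carry out on a machine, since multiplying by e^(2^n*i*t) just shifts a coefficient sequence by 2^n: the coefficients of P_(n+1) are those of P_n followed by those of Q_n. A quick numerical sketch in Python (the function names are mine, and the sup is only estimated on a grid of t's):

```python
import numpy as np

def rudin_shapiro(n):
    """Coefficient lists of P_n and Q_n: length 2^n, all entries +/- 1."""
    p, q = [1], [1]
    for k in range(n):
        # P_(k+1) = P_k + e^(2^k*i*t) Q_k ; the shift by 2^k is a concatenation
        p, q = p + q, p + [-c for c in q]
    return p, q

def sup_norm(coeffs, samples=4096):
    """Estimate sup over t of |sum_k coeffs[k] e^(ikt)| on a grid."""
    t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    k = np.arange(len(coeffs))
    values = np.exp(1j * np.outer(t, k)) @ np.array(coeffs)
    return np.abs(values).max()

# sum of |coefficients| is 2^n, yet sup |P_n| stays below sqrt(2) * 2^(n/2):
for n in range(1, 9):
    p, _ = rudin_shapiro(n)
    print(n, round(sup_norm(p) / 2 ** (n / 2), 3))
```

The printed ratios stay below sqrt(2), in line with |P_n|^2 <= 2^(n+1).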
The argument I've used for years to show this is very short, but it requires some prerequisites: The fact that |2^(-n) * P_n(t)| <= c * 2^(-n/2) shows that f lies in Lip_(1/2). In particular f is continuous, so the Fejer means of the Fourier series converge uniformly to f. It's also clear that |a_k| <= c/k, and now a theorem of Hardy shows that in fact the partial sums of the Fourier series converge uniformly, QED.

But pretend I didn't say that. I realized the other day that if you look at it right it's almost obvious that the partial sums of the Fourier series converge uniformly. (Note that there is a subsequence of the partial sums for which it actually _is_ obvious. But we want the entire sequence of partial sums, not a subsequence.)

Notation: If g(t) ~ sum(c_k * e^(ikt)) is a Fourier series let S_n = S_n(g) denote the n-th partial sum, that is the sum of the terms with |k| <= n. Now in general define Mg = sup_n |S_n(g)|, that is (Mg)(t) = sup_n |S_n(g)(t)|. That's "M" for "maximal function" - it's a fact (well-known in certain circles) that studying various sorts of convergence reduces to studying various sorts of maximal functions; the M here is the maximal function that's useful in studying uniform convergence of Fourier series.

What I realized the other day (this may be "new math", but I doubt it) is that not only is |P_n| <= c * 2^(n/2), in fact the stronger inequality

  (*)  M(P_n) <= c * 2^(n/2)

holds (with a different c). This actually follows from the fact that |P_n| <= c * 2^(n/2): If g is a trigonometric series and A is a set of integers let's say that g|A is the trigonometric series with coefficients equal to the coefficients of g on A and equal to 0 elsewhere. Now say that a "dyadic block of g of length 2^k" is a series g|I, where I is a "dyadic interval" of integers:

  I = {j*2^k, j*2^k + 1, j*2^k + 2, ... , (j+1)*2^k - 1}

(for some integer j). You show by induction that if p is a dyadic block of P_n of length 2^k then |p| <= c * 2^(k/2).
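The inequality (*) is also easy to check numerically: compute every partial sum S_N(P_n) on a grid of t's and compare the resulting maximal function to 2^(n/2). A sketch in Python (again the names are mine, and the sup over t is only estimated on a grid):

```python
import numpy as np

def rudin_shapiro(n):
    """Coefficient arrays of P_n and Q_n: length 2^n, entries +/- 1."""
    p, q = [1], [1]
    for k in range(n):
        p, q = p + q, p + [-c for c in q]
    return np.array(p), np.array(q)

def maximal_function(coeffs, samples=2048):
    """(Mg)(t) = sup over N of |S_N(g)(t)|, evaluated on a grid of t's."""
    t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    terms = np.exp(1j * np.outer(t, np.arange(len(coeffs)))) * coeffs
    partial_sums = np.cumsum(terms, axis=1)   # S_0(t), S_1(t), ..., per row
    return np.abs(partial_sums).max(axis=1)   # sup over N, for each t

# The ratios M(P_n) / 2^(n/2) stay bounded, as (*) claims:
for n in range(1, 9):
    p, _ = rudin_shapiro(n)
    print(n, round(maximal_function(p).max() / 2 ** (n / 2), 3))
```

The ratios stay well under the sqrt(2)/(1 - 1/sqrt(2)) that the dyadic-block argument gives.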
This is clear, because in fact it's clear that |p| = |P_k| everywhere OR |p| = |Q_k| everywhere; this is what you show by induction. (For example P_(n+1) has two non-vanishing dyadic blocks of length 2^n, the first of which is P_n and the second of which is e^(i*2^n*t) * Q_n.)

This implies (*): If 0 <= N < 2^n then N has a representation as a sum of distinct powers of 2 less than 2^n, and hence S_N(P_n) is a sum of dyadic blocks of P_n, of distinct lengths. So it follows that |S_N(P_n)| can be no larger than the sum of c * 2^(k/2) for k = 0..n-1, but this sum is again c * 2^(n/2) (with a different c). So |S_N(P_n)| <= c * 2^(n/2) for every N, and since this c does not depend on n or N (it's actually sqrt(2)/(1 - 1/sqrt(2)) or something), (*) follows.

And sure enough M has something to do with uniform convergence: In fact (*) shows that the partial sums S_N(f) converge uniformly: We need only show that the tail S_M(f) - S_N(f) converges to 0 uniformly. But this tail consists of (possibly) several full blocks of the form 2^n <= k < 2^(n+1), together with an initial segment of one such block (the last one) and a terminal segment of another such block (the first one). The part of the sum corresponding to the block from 2^n to 2^(n+1)-1 is no larger than c * 2^(-n/2) (the 2^(-n) in the definition of f times the bound for |P_n|), and (*) shows that the parts corresponding to the two partial blocks satisfy a similar inequality. So if, say, 2^n <= N < 2^(n+1) and 2^m <= M < 2^(m+1) then (*) shows that

  |S_M(f) - S_N(f)| <= c * (2^(-n/2) + ... + 2^(-m/2)),

which tends to 0 as N, M tend to infinity. (If you don't see where the minus signs came from you may be forgetting the 2^(-n) in the definition of f.)

QED
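One can watch the tail estimate happen numerically: build the coefficients a_k of f out of the Rudin-Shapiro coefficients and check that sup_t of the sum over a dyadic block decays like 2^(-n/2), even though each block adds exactly 1 to sum |a_k|. A sketch in Python (my own code and naming, with the sup over t estimated on a grid):

```python
import numpy as np

def rudin_shapiro(n):
    """Coefficient list of P_n: length 2^n, entries +/- 1."""
    p, q = [1], [1]
    for k in range(n):
        p, q = p + q, p + [-c for c in q]
    return p

def f_coeffs(n_blocks):
    """Coefficients a_k of f(t) = sum_n e^(i*2^n*t) 2^(-n) P_n(t), 0 <= k < 2^n_blocks.
    Block n occupies indices 2^n .. 2^(n+1)-1; a_0 = 0."""
    a = [0.0]
    for n in range(n_blocks):
        a += [c * 2.0 ** (-n) for c in rudin_shapiro(n)]
    return np.array(a)

def tail_sup(a, N, M, samples=2048):
    """Estimate sup over t of |S_M(f) - S_N(f)| = |sum_{N < k <= M} a_k e^(ikt)|."""
    t = np.linspace(0.0, 2 * np.pi, samples, endpoint=False)
    k = np.arange(N + 1, M + 1)
    values = np.exp(1j * np.outer(t, k)) @ a[N + 1:M + 1]
    return np.abs(values).max()

a = f_coeffs(12)
print(np.abs(a).sum())                 # one unit of |a_k| mass per block
for n in range(3, 11):                 # tail over the single block [2^n, 2^(n+1))
    print(n, round(tail_sup(a, 2 ** n - 1, 2 ** (n + 1) - 1), 4))
```

The block tails shrink geometrically even though sum |a_k| grows without bound, which is the whole point of the example.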