From: Doug Magnoli
Subject: Re: Two differential equations
Date: Thu, 06 Jul 2000 09:23:56 GMT
Newsgroups: sci.math
Summary: [missing]

I fed your second equation to
http://mss.math.vanderbilt.edu/~pscrooke/detoolkit.html
as a general first-order equation, and it gave:

    y = (2 - 2x^3) / (2x + x^4)

which I checked, and it works--even gets the boundary condition right.

-Doug Magnoli

Florian Gyarfas wrote:
> Hi everybody!
>
> I have two little diff. equations and I would appreciate it if someone
> could help me solve them.
>
> 1) y' = xy/(x^2-1) + e^x * sqrt(1-x^2)
>    with the initial condition y(0) = 1
> 2) y' = y^2 - 2/x^2
>    with the initial condition y(1) = 0
>
> I have no idea how to solve #2 (I think it's a special Riccati
> equation). Number 1 seems to be fairly easy. However, I've tried it a
> hundred times by now and still don't get a solution.
>
> Please help me... Thanks so much in advance!
>
> And by the way, I can get the final solution with Maple, so what I need
> is a detailed solution (incl. all the steps that lead to the solution).
>
> *************************************
>
> Though probably not of any interest:
> My - wrong - solution for 1:
>
> y' = xy/(x^2-1) + e^x * sqrt(1-x^2)
> => yh' = x/(x^2-1) * y
> =int=> log(y) = 1/2 * int(2x/(x^2-1)) dx
> => y = sqrt((x-1)(x+1)) * c(x)
> => y = c(x) * sqrt(x^2-1)
> => y' = c'(x) * sqrt(x^2-1) + x*c(x)/sqrt(x^2-1)    (*)
>
> Insert y into the original DE:
>
> => y' = x/(x^2-1) * c(x)*sqrt(x^2-1) + e^x * sqrt(1-x^2)
>
> Insert (*) into the equation above:
>
> c'(x)*sqrt(x^2-1) + x*c(x)/sqrt(x^2-1)
>     = x/(x^2-1) * c(x)*sqrt(x^2-1) + e^x * sqrt(1-x^2)
>
> => c'(x) * sqrt(x^2-1) = e^x * sqrt(1-x^2)
>
> => c'(x) = e^x * (sqrt(1-x)/sqrt(x-1))
> => c(x) = int (...)
>        = -sqrt(-1)*e^x + c2
> => y = sqrt((x-1)(x+1))*c(x) = sqrt((x-1)(x+1)) * (-sqrt(-1)*e^x + c2)
> => y = sqrt(1-x^2)*e^x + c2 * sqrt(x^2-1)
>
> Let c2 be 0 (just to test if the solution is correct)
> => y = sqrt(1-x^2)*e^x
> => y' = -e^x * (x - 1 + x^2)/sqrt(1-x^2)
>
> => Now insert y' and y into the original DE... and unfortunately, it
> doesn't match :-)
>
> Regards,
> Florian

==============================================================================
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: What is this?
Date: 6 Oct 2000 14:22:35 -0400
Newsgroups: sci.math

In article <8rkvdt$h0g$1@nnrp1.deja.com>, barometer wrote:
:Ok, people, let me solve in the traditional way the differential
:equation y'=y.
:
:y'=y =>
:
:        dy
:1)      -- = y   =>
:        dx
:
:        dy
:2)      -- = dx  =>
:        y
:
:            dy
:3)      int -- = int dx  =>
:            y
:
:4)      log|y| = x + c  =>
:
:5)      y = (+/-)exp(x+c)  =>
:
:6)      y = c*exp(x).
:
:Consider the steps 1 and 2. I know, they are symbolic procedures. But
:what do they REALLY mean? I mean, from the formal viewpoint they are
:nonsense. They seem to express some mystique rules concerning two
:mysterious numbers dy and dx (surreal-like). To make my point clearer,
:can someone *solve* (not guess the solution) this equation in a
:completely formal way? Can someone replace this method of separation
:of variables with a formal method?
:
You are right to have reservations about symbols that were not
satisfactorily explained. There are other reservations, such as: by
division without precautions, we may lose some solutions. (Sometimes we
can restore them by other methods.)

Answer #1:

So, y'=y has an obvious solution y=0 (constant) which we lose if we
divide by y carelessly.

If y is non-zero at a point then it is non-zero on an interval (because
of continuity), and we can indeed divide:

   y' / y = 1

The left side is (1/y) * y', which is, by the Chain Rule, the
derivative of ln(abs(y)).
So,

   (d/dx) (ln(abs(y))) = (d/dx) (x)

   (d/dx) (ln(abs(y)) - x) = 0

and we know that on an interval, the only functions with zero
derivative are constants. (You can take over here - there is the chore
of determining the interval of x; luckily, it turns out to be all of R.)

Answer #2 (exploiting a lucky guess):

We know that exp(x) is one of the solutions, and it is never zero, so
we can divide by it without exception. Assume that y'=y, and
differentiate y/exp(x), using the Quotient Rule:

   (d/dx) (y/exp(x)) = (y' * exp(x) - y * exp(x)) / (exp(x))^2
                     = (y' - y) / exp(x) = 0

because y' - y = 0 by assumption. So, y / exp(x) is a constant function
on all of R, etc.

Remark:

For the equation y' = 3*y^(2/3), division by y without covering the
possibility of y=0 on some interval is even more harmful. It is quite
educational to find what may happen.

Hope some of it helps, ZVK(Slavek)

==============================================================================
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: What is this?
Date: 6 Oct 2000 18:54:31 -0400
Newsgroups: sci.math

In article <8rldvt$rji$1@nnrp1.deja.com>, barometer wrote:
>In article <8rl59b$cl5@mcmail.cis.McMaster.CA>,
> kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik) wrote:
[...]
:> Remark:
:>
:> For the equation y' = 3*y^(2/3), division by y without covering
:> the possibility of y=0 on some interval is even more harmful. It
:> is quite educational to find what may happen.
:>
:
:Oh, I know that. This particular equation does not have a unique
:solution: both y=0 and y=x^3 pass through the origin. Thus if we
:divide by y without covering y=0 first, we lose a solution. Right?
:Moreover, the above example shows that something more than mere
:continuity is needed to guarantee uniqueness. Lipschitz.
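Both of the solutions mentioned in the quote are easy to verify directly:
for x >= 0 we have (x^3)^(2/3) = x^2, so y = x^3 satisfies the equation
just as y = 0 does. A minimal Python sanity check (restricted to x >= 0
so the 2/3 power stays real; the helper names are arbitrary):

```python
# Check numerically that two different functions both solve
# y' = 3*y^(2/3) with y(0) = 0: the constant y = 0 and y = x^3.

def rhs(y):
    """Right-hand side of the ODE, for y >= 0."""
    return 3.0 * y ** (2.0 / 3.0)

def solves(f, fprime, xs, tol=1e-9):
    """True if fprime(x) == rhs(f(x)) at every sample point."""
    return all(abs(fprime(x) - rhs(f(x))) < tol for x in xs)

xs = [0.0, 0.5, 1.0, 2.0]

# Solution 1: y = 0, so y' = 0.
assert solves(lambda x: 0.0, lambda x: 0.0, xs)

# Solution 2: y = x^3, so y' = 3x^2.
assert solves(lambda x: x ** 3, lambda x: 3.0 * x ** 2, xs)

print("both y = 0 and y = x^3 satisfy y' = 3*y^(2/3) with y(0) = 0")
```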
The failure of uniqueness is a little more amusing: besides y=0 and
y=(x-c)^3 for a constant c, we have "spline" solutions for every pair
of a, b with a < b:

   y = (x-a)^3  for x <= a,
   y = 0        for a <= x <= b,
   y = (x-b)^3  for x >= b.

==============================================================================

barometer writes:
> Ok, people, let me solve in the traditional way the differential
> equation y'=y.
> [reformatted]
> y'=y =>
> 1) dy/dx = y
> 2) dy/y = dx
> [snip]
>
> Consider the steps 1 and 2. I know, they are symbolic procedures. But
> what do they REALLY mean? I mean, from the formal viewpoint they are
> nonsense.

The only thing close to "nonsense" about this whole computation is the
notation that "dy/dx" is a derivative.

First, we should adopt the notation that a prime (e.g., y') means the
derivative with respect to x. So, using the other notation, y' = dy/dx.
(There are other notations, like y'=Dy=y_x etc.; any one of these
works. Just don't use dy/dx in what follows.)

Now we can get on with what the manipulations really mean. The full
story involves something called differential forms and something called
jet bundles. Here's a quick summary (and, I should emphasize, only a
small part of the story - but hopefully enough).

Differential forms: suppose you are working with coordinates
(variables) x and y. Introduce two new symbols, "dx" and "dy". There
are rules for what you can do with these new objects; included in these
rules is that they are the basis of a vector space. As a consequence,
writing down expressions like

   (x*y) dx + (1/y) dy

makes sense. (You should interpret "(1/y) dy" as the product of (1/y)
and dy.)

Jet bundles: in this context, you actually need three coordinates
(variables): x, y, and p. The thing that makes a jet bundle what it is,
and not merely "physical space" involving three variables, is that
there is something extra: the equation

   0 = dy - p dx .

It is a fact that (almost) any curve through (x,y,p)-space, for which
the equation 0 = dy - p dx holds, is actually of the form
(x, y=f(x), p=f'(x)).
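This fact can be illustrated numerically: along the curve
t -> (x, y, p) = (t, f(t), f'(t)), the pullback of dy - p dx is
(dy/dt - p * dx/dt) dt, which vanishes identically. A small sketch,
with f = sin as an arbitrary choice and derivatives approximated by
central differences:

```python
# Along the 1-jet of f(x) = sin(x), check that dy/dt - p * dx/dt = 0,
# i.e. that the curve annihilates the contact form dy - p dx.

import math

def curve(t):
    # the 1-jet of sin: (x, y, p) = (t, sin t, cos t)
    return t, math.sin(t), math.cos(t)

def d(g, t, h=1e-6):
    """Central-difference derivative of a scalar function g at t."""
    return (g(t + h) - g(t - h)) / (2 * h)

for t in [0.0, 0.7, 1.3, 2.5]:
    x, y, p = curve(t)
    dx_dt = d(lambda s: curve(s)[0], t)   # equals 1
    dy_dt = d(lambda s: curve(s)[1], t)   # equals cos t
    residual = dy_dt - p * dx_dt          # pullback of dy - p dx
    assert abs(residual) < 1e-8

print("dy - p dx vanishes along the 1-jet of sin")
```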
In other words, the equation 0 = dy - p dx completely describes (in
some sense) "functions and their derivatives"; and the variable p
represents the derivative.

Now, we are interested in the differential equation y'=y. Because p
represents the derivative, we are interested in the equation p=y.
Actually this means we are now interested in the system of two
equations:

   p = y
   0 = dy - p dx

(because we *always* have the second equation on the jet bundle).
Stick the first equation into the second, and

   0 = dy - y dx

or, dividing by y,

   0 = (1/y) dy - dx

or

   (1/y) dy = dx .

As (sort of) a side note, it turns out that your step (3),
int dy/y = int dx, can be bypassed using the differential forms
approach; this really involves the "differential" part of it, and it
goes like this:

   0 = (1/y) dy - dx = d[log(|y|) - x]

   C = log(|y|) - x .

(Technical note for the experts: equations like 0 = dy - y dx should
be interpreted as follows. You are looking for a function u:t->(x,y)
such that u*(dy - y dx) = 0, where u* is the pullback. Since pullback
is linear, you can do all the linear operations without worrying about
changing the equation. Now look at

   0 = u*(dy - y dx) = y'(t) dt - y(t) x'(t) dt
     = [y'(t) - y(t) x'(t)] dt ;

clearly this is equivalent to

   0 = y'(t) - y(t) x'(t)

and therefore, in this case, you *could* formally replace "dy" by
"y'(t)", and "dx" by "x'(t)". Having done so, you can multiply and
divide by x'(t) at will; hence, you can multiply and divide by dx. You
can see how much more complicated this gets with several independent
variables, which is why it is much more subtle to manipulate partial
derivatives formally as fractions. Finally, you really want y as a
function of x, not as a function of t; so you impose the independence
condition that x'(t) <> 0, or dx <> 0, and use the implicit function
theorem to go from (y(t), x(t)) to y(x).)

Anyway, this stuff involving differential forms and jet bundles does
justify the manipulations between your steps (1) and (2). (And it does
a whole lot more than that.)
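The conclusion C = log(|y|) - x says that log(|y|) - x is a first
integral: it is constant along every solution of y'=y. A quick Python
sanity check, using the known solutions y = C*exp(x) for a few values
of C (for which the constant should be log|C|):

```python
# Verify that log(|y|) - x is constant along each solution y = C*exp(x)
# of the equation y' = y.

import math

for C in [0.5, 1.0, 3.0, -2.0]:
    values = [math.log(abs(C * math.exp(x))) - x for x in [0.0, 1.0, 2.5]]
    # every value should equal log|C|, independent of x
    assert all(abs(v - math.log(abs(C))) < 1e-12 for v in values)

print("log|y| - x is constant along each solution y = C*exp(x)")
```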
And, having understood this justification, you can go back to writing
dy/dx.

> They seem to express some mystique rules concerning two
> mysterious numbers dy and dx (surreal-like).

That's actually a good observation, although the description is not
quite correct. dy and dx are NOT numbers, hence their "surrealism" and
"mystery". They are part of a different kind of algebraic system, in
which the usual rules don't necessarily apply. For example, dy and dx
can be "multiplied" to form what we call the "exterior product", dy^dx.
(The "^" is not a superscript; it really is an upside-down "V".) But
dy^dx is not the same as dx^dy; rather, dy^dx = -dx^dy. The whole
thing is called exterior algebra.

> To make my point clearer, can someone *solve* (not guess the
> solution) this equation in a completely formal way?

Well, you just did - it's not that hard, for this particular equation,
to isolate all the "y" stuff on one side and all the "x" stuff on the
other, and then just solve the thing.

However, there is an obvious more general question: if I write down
any old differential equation, can you, without guessing anything,
write down the solution? The answer is no - some differential
equations have solutions which simply cannot be "written down" (just
as some indefinite integrals, like \int e^{x^2} dx, cannot be "written
down" - of course you have to define "write down").

But: there is (sort of) an algorithm for looking at a differential
equation and producing an appropriate manipulation, or change of
variables, which will let you write down the solution, assuming that
the differential equation satisfies certain (complicated) conditions.
In fact, all of the "classical tricks" for solving ordinary
differential equations can be derived using this method.
(Unfortunately the method is often not very practical, so it's better
in practice to simply memorize all the tricks.)
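One classical "no guessing" construction worth mentioning alongside
this is Picard iteration: rewrite y' = y, y(0) = 1 as the integral
equation y(x) = 1 + int_0^x y(t) dt and iterate. For this equation the
iterates are exactly the Taylor partial sums of exp(x). A minimal
numerical sketch (plain Python with the trapezoid rule; the grid size,
iteration count, and tolerance are arbitrary choices):

```python
# Picard iteration for y' = y, y(0) = 1, on [0, 1]:
#   y_{n+1}(x) = 1 + integral_0^x y_n(t) dt
# Each iterate adds one more term of 1 + x + x^2/2 + ... -> exp(x).

import math

N = 1000                        # grid points on [0, 1]
xs = [i / N for i in range(N + 1)]
y = [1.0] * (N + 1)             # y_0 = 1 (the constant initial guess)

for _ in range(25):             # 25 Picard iterations
    integral = 0.0
    new = [1.0]
    for i in range(1, N + 1):
        # composite trapezoid rule for integral_0^{x_i} y(t) dt
        integral += 0.5 * (y[i - 1] + y[i]) * (xs[i] - xs[i - 1])
        new.append(1.0 + integral)
    y = new

# After enough iterations the result is close to exp(x).
err = max(abs(y[i] - math.exp(xs[i])) for i in range(N + 1))
assert err < 1e-5
print("max |y - exp| on [0,1]:", err)
```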
The basic idea of the algorithm is that you find a "symmetry" and use
it to construct a better system of coordinates, i.e., find a variable
substitution which makes the differential equation look much nicer.

> Can someone replace this method of separation of variables with a
> formal method?

I think I did that. One thing to note: the formal method
(0 = dy - p dx, etc.) will not usually help you find a nice
substitution, or tell you *how* to separate variables, or anything
like that; for these things, you need the (sort of) algorithm I
mentioned, or perhaps other machinery. The formal method does,
however, have other applications, and is much more powerful when
dealing with systems of partial differential equations.

Kevin.
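As a postscript to this file: the toolkit solution quoted in the first
post above, y = (2 - 2x^3)/(2x + x^4) for y' = y^2 - 2/x^2 with
y(1) = 0, can itself be sanity-checked numerically, in the same spirit
as the checks above (central differences; the sample points and
tolerances are arbitrary choices):

```python
# Check that y = (2 - 2x^3) / (2x + x^4) satisfies y' = y^2 - 2/x^2
# and the initial condition y(1) = 0.

def y(x):
    return (2 - 2 * x ** 3) / (2 * x + x ** 4)

def rhs(x):
    return y(x) ** 2 - 2 / x ** 2

# initial condition y(1) = 0
assert abs(y(1.0)) < 1e-12

# ODE at several sample points, via central differences
h = 1e-5
for x in [0.5, 1.0, 1.7, 2.4]:
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(dydx - rhs(x)) < 1e-6

print("y = (2 - 2x^3)/(2x + x^4) passes the numerical check")
```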