From: "Jack W. Crenshaw" Newsgroups: sci.math,sci.math.num-analysis Subject: Re: curve fitting - parameter estimation Date: 29 Nov 1998 14:44:43 GMT rdrewe wrote: > > A problem that arises all the time in research is that we have a > parametric equation which arises from a model, and we wish to find the > 'best fit" of this specific parametric equation to some data - ie, find > the particular values of the parameters which provide a global minimum > for some error function. (Note: I am NOT talking about *approximating* > the data with other functions, eg, Chebyshev approximation). Books on > applied statistics typically discuss only linear regression & polynomial > regression (solvable by Gaussian elimination). If they discuss > non-polynomial regression at all, they usually discuss transformations > to a linear form, but this is only applicable to a few special functions > such as hyperbolas and exponentials, and it causes highly nonlinear > weighting of the data. > > However, curve fitting with parameter estimation is a daily problem in > many research areas. The relevant parametric equation is never one of > the simple cases described above. Transformations to a linear form are > insufficiently general and are very undesirable because of parameter > space distortion. What general methods are available? I would welcome > either outlines of general algorithms or Web references. I found the > university library very unhelpful but perhaps I don't know the best > place to look. Can multiple analysis of variance be used? Is it > workable to write partial derivatives of the error function as a matrix > and use some general iterative scheme to solve it? (I got a bit lost > trying to do this). > > Thanks > > Ross Drewe > First, you must define what you mean by "best." If, as most people do, you mean "minimum sum of squares," then a least-squares fit _IS_ what you want. Consider any function phi(x, a) where x is the independent variable, and a is a vector of unknown parameters. Go through the usual process (i.e., assume a sample xi, yi), define an error M(a) = Sum[(phi(xi, a) - yi)^2] Now require that this error be minimized for some value of a. After you're done, you'll have an equation involving a matrix that's the partials of phi(x, a) wrt a. Your goal is to satisfy the equation. The linear equation (e.g., polynomial) assumption allows one to go much farther, by separating out the a's as linear coefficients. If, in addition, you assume that phi(x, a) is a power series in x, you get the familiar equation for the ordinary least squares polynomial fit. The general approach, however, works for non-linear equations as well. You may decide, however, that the least-squares error criterion is not what you want. You may prefer the minimax, a la Chebyshev. Again, the same methods apply, only you must do some (a _LOT_!) of iteration, instead of being able to write out the solution in one step. FWIW, there's a program, MicroMath Scientist, which will do all this for you. You simply give it the data set, and the equation, and it produces the best fit for the coefficients. I have Micromath Scientist, but don't have it installed on this computer, and can't lay my hands on the disk. So I can't give you an address. I _HAVE_, however, used it extensively. It seems to work quite well. Jack