Can linear least squares be used for inverse function approximation?

In summary, the poster is using linear least squares via SVD to find the coefficients of a ten-term polynomial f that predicts a deviation for a given input. They now want the reverse: a function that, given a target value, finds the input whose predicted result matches it. Unsure of the correct terminology, they ask whether this is "inverse least squares" and whether to solve for roots of the fitted polynomial or perform a new least-squares fit on the inverse function.
  • #1
xactmetric
Hi,

Forgive me if the subject of this post is not accurate, I'm not quite sure what the correct terminology would be for what I'm trying to figure out.

Currently I am using linear least squares via SVD to find the coefficients of a ten-term polynomial, say f. This model allows me to predict some output y given some input x. That's straightforward.

What I would like to be able to do is turn this around and find a function g where g would take an input x, and output some y' such that f( y' ) = x.

Is this familiar to anyone? I'm totally stumped.

xactmetric
 
  • #2
Why can't you do that now? In linear least squares, you are given a list of points, [itex](x_1, y_1), (x_2, y_2), \dots, (x_n, y_n)[/itex], and you construct an equation y = ax + b whose graph passes close (in the least-squares sense) to each of those points. Given any other x, the "predicted" y value is ax + b. But linear equations are easily solved: given any y, the "predicted" x value is just (y - b)/a. All the work has already been done in using linear least squares to find a and b.
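In code, the linear case really is this direct. A minimal sketch, using made-up noisy data and NumPy's polyfit standing in for the least-squares step:

```python
import numpy as np

# Hypothetical noisy data roughly following y = 2x + 1.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

# Least-squares fit of y = a*x + b.
a, b = np.polyfit(x, y, 1)

def predict_y(x_val):
    return a * x_val + b

def predict_x(y_val):
    # The linear model inverts in closed form: x = (y - b) / a.
    return (y_val - b) / a

# Round trip: predict_x(predict_y(v)) recovers v up to float error.
v = 3.7
assert abs(predict_x(predict_y(v)) - v) < 1e-9
```

The point is that for a linear model the inverse needs no new fitting at all; the closed-form rearrangement reuses the coefficients already found.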
 
  • #3
Yes but my equation is nonlinear. Let me try and explain better.

Originally I start with two lists of points. Let's put them all on the x-axis for simplicity. The first list I will call actual coordinates. The second list I will call deviated coordinates. The deviated coordinates are simply the actual coordinates plus some offset. My prediction function f just predicts the deviation of an inputted actual coordinate.
So if I input any actual coordinate x into f, f will give me a predicted deviation. So my predicted deviated point is x + f(x).

Now say I know a point x in advance. I need to find a way to determine a point x' such that f( x' ) gives me the deviation I need, so that x' + f( x' ) = x.

Does this make sense? Or am I missing something totally obvious?

xactmetric
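The relation x' + f(x') = x is a one-dimensional root-finding problem in x'. A minimal sketch, using a toy quadratic as a stand-in for the actual ten-term fitted polynomial (an illustrative assumption) and plain bisection, though any bracketing root finder would do:

```python
import numpy as np

# Toy stand-in for the fitted deviation polynomial f:
# f(t) = 0.5 + 0.02*t + 0.001*t^2 (coefficients low-to-high order).
f = np.polynomial.Polynomial([0.5, 0.02, 0.001])

def undeviate(x, lo=-1e3, hi=1e3, tol=1e-10):
    """Find x' with x' + f(x') = x by bisection on g(t) = t + f(t) - x."""
    g = lambda t: t + f(t) - x
    assert g(lo) < 0 < g(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 42.0
xp = undeviate(x)
# Check the defining relation: xp + f(xp) should equal x to tolerance.
assert abs(xp + f(xp) - x) < 1e-6
```

Bisection is slow but robust; if f(x') is small relative to x', then x itself is a good starting guess for faster Newton-style iteration.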
 
  • #4
I don't think you are using the right terminology. I would think for inverse least squares you might look at something like this:

http://signals.auditblogs.com/2007/07/05/multivariate-calibration/

Anyway, it sounds like you have fit your data with a low-order polynomial using least squares and now need to invert it. You have two choices. You can either find, numerically, the roots of the polynomial equation that defines the inverse, or instead you can do a new least-squares fit on the inverse function.
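The second choice can be sketched as follows: sample the forward map h(x) = x + f(x) over the working range, swap the columns, and run a fresh least-squares fit so the new polynomial maps deviated coordinates back to actual ones. The deviation polynomial, range, and degree here are illustrative assumptions, not the poster's actual fit:

```python
import numpy as np

# Assumed stand-in for the fitted deviation polynomial f.
f = np.polynomial.Polynomial([0.5, 0.02, 0.001])
h = lambda t: t + f(t)          # actual coordinate -> deviated coordinate

# Sample h over the working range, then least-squares fit the swapped
# pairs (h(t), t) to get an approximate inverse g.
t = np.linspace(0.0, 100.0, 400)
g = np.polynomial.Polynomial.fit(h(t), t, deg=5)

# g(h(t)) should recover t to within the fit's residual error.
err = np.max(np.abs(g(h(t)) - t))
```

This only works well where h is monotonic over the sampled range; otherwise the "inverse" is not single-valued and no polynomial fit can represent it.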
 
  • #5
Thanks, but can you please elaborate a bit. I'm not following you.

xactmetric
 
  • #6
xactmetric said:
Thanks, but can you please elaborate a bit. I'm not following you.

xactmetric

Can you first try to write down, clearly, the exact problem you are trying to solve?
 
  • #7
The problem is exactly as described in my second post. If you can say what's not clear, I'll try and explain a bit more.
 

Related to Can linear least squares be used for inverse function approximation?

1. What is inverse linear least squares?

Inverse linear least squares is a regression technique in which the usual roles of the variables are swapped: instead of fitting the response as a function of the input, the input is fit as a function of the response. The fit still minimizes a sum of squared residuals, but in the input variable rather than the response.

2. How does inverse linear least squares differ from regular linear least squares?

In regular linear least squares, the model y = f(x) is fit by minimizing the squared vertical residuals, i.e., the errors in y. In inverse linear least squares, the roles are reversed: x is modeled as a function of y, and the squared residuals in x are minimized. Because the two procedures minimize different residuals, they generally produce different fitted lines.
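The difference can be seen directly by fitting the same noisy data both ways; the data and noise level here are made up for illustration:

```python
import numpy as np

# Hypothetical noisy data: y roughly 3x + 2.
rng = np.random.default_rng(1)
x = np.linspace(0.0, 5.0, 200)
y = 3.0 * x + 2.0 + rng.normal(scale=0.5, size=x.size)

# Forward fit: regress y on x (minimizes vertical residuals).
a_fwd, b_fwd = np.polyfit(x, y, 1)

# Inverse fit: regress x on y (minimizes horizontal residuals),
# then rearrange the result back into y = a*x + b form for comparison.
c, d = np.polyfit(y, x, 1)          # x ≈ c*y + d
a_inv, b_inv = 1.0 / c, -d / c

# With noisy data the two lines differ slightly; both are near y = 3x + 2.
print(a_fwd, b_fwd, a_inv, b_inv)
```

With noiseless data the two fits coincide; the gap between them grows with the noise level, which is why the choice of fitting direction matters in calibration.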

3. When is inverse linear least squares used?

Inverse least squares is commonly used in calibration problems, where an instrument's response is measured and the goal is to recover the underlying input. It is also useful, as in this thread, when a forward model must be inverted and no closed-form inverse exists.

4. What are the advantages of using inverse linear least squares?

When the ultimate goal is to predict the input from the response, fitting in the inverse direction minimizes the errors in the quantity actually being predicted, which can yield more accurate estimates than algebraically inverting a forward fit.

5. Are there any limitations to using inverse linear least squares?

One limitation is that a nonlinear forward function may not be one-to-one, in which case its inverse is not well defined everywhere and root-finding can return the wrong branch. Iterative solvers also need a reasonable initial guess or bracket to converge to the intended solution, and a least-squares fit of the inverse is only trustworthy over the range where the forward function is monotonic and well sampled.
