- #1
TheDestroyer
Hello guys,
I need your help understanding a fitting algorithm so that I can implement it in a C++ program.
I have the following function:
g(f; f0, phi0, a, b) = phi0 + a * arctan((f - f0) / b)
which is a function of the frequency f.
I would like to fit this function with the parameters f0, phi0, a, and b.
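For reference, here is how I would write the model in C++ (the function name `model` is just my own choice):

```cpp
#include <cmath>

// Model: g(f; f0, phi0, a, b) = phi0 + a * arctan((f - f0) / b)
double model(double f, double f0, double phi0, double a, double b) {
    return phi0 + a * std::atan((f - f0) / b);
}
```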
I read many references about the Levenberg–Marquardt algorithm, e.g.:
http://en.wikipedia.org/wiki/Levenberg–Marquardt_algorithm
but I always get lost at the point where everything turns into matrices. Please read what I did and explain, in simple language, how I can continue, because every website explaining this gets complicated too fast.
I understand the idea of minimising the squared differences between my function and the data points:
χ² = Σ_i (y_i - g(f_i; f0, phi0, a, b))²  // chi square, the error function to be minimised
where y_i is data point number i, corresponding to the frequency f_i.
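Computing χ² in C++ would then be a direct translation of the sum above (again, the names here are my own choices):

```cpp
#include <cmath>
#include <vector>

// Model: g(f; f0, phi0, a, b) = phi0 + a * arctan((f - f0) / b)
double model(double f, double f0, double phi0, double a, double b) {
    return phi0 + a * std::atan((f - f0) / b);
}

// chi^2 = sum over i of (y_i - g(f_i; f0, phi0, a, b))^2
double chi_squared(const std::vector<double>& f, const std::vector<double>& y,
                   double f0, double phi0, double a, double b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < f.size(); ++i) {
        double residual = y[i] - model(f[i], f0, phi0, a, b);
        sum += residual * residual;
    }
    return sum;
}
```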
I calculated the derivatives of the function with respect to the parameters f0, phi0, a, and b; let's call these derivatives g_f0, g_phi0, g_a, g_b. We collect them in a row vector and call it the Jacobian (as far as I understood it):
J = {g_f0, g_phi0, g_a, g_b} // derivatives evaluated at the current parameter values
δ = {δf0, δphi0, δa, δb} // the update step to be added to the parameters
and we multiply the Jacobian with this step δ, after evaluating the derivatives at the current parameter values. Then we substitute this into χ²:
χ² = Σ_i (y_i - g(f_i; f0, phi0, a, b) - J_i·δ)².
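I worked out the four partial derivatives by hand; if my calculus is right, one row of the Jacobian at a frequency f would look like this in C++ (the name `jacobian_row` is my own):

```cpp
#include <array>
#include <cmath>

// One row of the Jacobian: partial derivatives of
// g(f) = phi0 + a * atan((f - f0) / b)
// with respect to (f0, phi0, a, b), evaluated at frequency f.
std::array<double, 4> jacobian_row(double f, double f0, double phi0,
                                   double a, double b) {
    (void)phi0;                       // g is linear in phi0, so it drops out
    double u = f - f0;
    double denom = b * b + u * u;     // b^2 + (f - f0)^2
    return {
        -a * b / denom,               // dg/df0
        1.0,                          // dg/dphi0
        std::atan(u / b),             // dg/da
        -a * u / denom                // dg/db
    };
}
```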
The way I understand it, there has to be a way to determine what δ should be at each step, so that the process is really "iterative".
The question is: how do we determine δ? And how do we determine the direction and magnitude of its update so that we converge to the right function?
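From what I can tell from the Wikipedia article, δ would be obtained by solving the damped normal equations (JᵀJ + λ·diag(JᵀJ))·δ = Jᵀr, where r_i = y_i - g(f_i) are the residuals and λ is the damping factor. This is only my attempt at a sketch (the function name and the naive 4×4 Gaussian elimination are my own choices), so please correct me if I got it wrong:

```cpp
#include <array>
#include <cmath>
#include <utility>
#include <vector>

// Solve (J^T J + lambda * diag(J^T J)) * delta = J^T r for the 4-parameter
// step delta. J has one row of partial derivatives per data point; r holds
// the residuals r_i = y_i - g(f_i). Uses Gaussian elimination with partial
// pivoting on the 4x4 augmented system.
std::array<double, 4> lm_step(const std::vector<std::array<double, 4>>& J,
                              const std::vector<double>& r, double lambda) {
    double A[4][5] = {};  // augmented matrix [J^T J | J^T r], zero-initialised
    for (std::size_t i = 0; i < J.size(); ++i) {
        for (int p = 0; p < 4; ++p) {
            for (int q = 0; q < 4; ++q) A[p][q] += J[i][p] * J[i][q];
            A[p][4] += J[i][p] * r[i];
        }
    }
    // Marquardt damping: scale the diagonal of J^T J by (1 + lambda).
    for (int p = 0; p < 4; ++p) A[p][p] *= 1.0 + lambda;
    // Forward elimination with partial pivoting.
    for (int col = 0; col < 4; ++col) {
        int piv = col;
        for (int row = col + 1; row < 4; ++row)
            if (std::fabs(A[row][col]) > std::fabs(A[piv][col])) piv = row;
        for (int k = 0; k < 5; ++k) std::swap(A[col][k], A[piv][k]);
        for (int row = col + 1; row < 4; ++row) {
            double m = A[row][col] / A[col][col];
            for (int k = col; k < 5; ++k) A[row][k] -= m * A[col][k];
        }
    }
    // Back substitution.
    std::array<double, 4> delta;
    for (int p = 3; p >= 0; --p) {
        double s = A[p][4];
        for (int q = p + 1; q < 4; ++q) s -= A[p][q] * delta[q];
        delta[p] = s / A[p][p];
    }
    return delta;
}
```

Each iteration would then add δ to the current parameters, recompute χ², and increase λ if χ² got worse or decrease it if χ² improved, until the step becomes negligibly small.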
Thank you for any efforts