When will fixed point iteration converge?

  • Thread starter snoopies622
In summary: the thread starter's dad wrote a computer program that finds an approximate solution to the "crossed-ladders problem" for given ladder lengths and intersection height. The program uses fixed point iteration to find the square of the width and returns the square root of the result after about ten cycles. Neither of them fully understood why the method works, especially since the same approach applied to a random polynomial usually diverges. The answer reached in the thread: fixed point iteration converges if the absolute value of the derivative of the iterated function stays below one, or equivalently if the function is a contraction (it shrinks distances by a factor k < 1).
  • #1
snoopies622
Recently my dad wrote a computer program that finds an approximate solution to the "crossed-ladders problem"

http://en.wikipedia.org/wiki/Crossed_ladders_problem

for input ladder lengths and height of ladder intersection. It uses what I just learned is called "fixed point iteration" to find the square of the width (approximately), and then after about ten cycles returns the square root of the result.

We are both impressed with how efficiently it finds so precise an answer, but neither of us completely understands why it works, especially since, if one takes a random polynomial equation and tries to solve it by isolating the degree-one term, making an initial guess, plugging it into the other side, and iterating, the result almost always diverges.

So my question is: under what circumstances will fixed point iteration converge? I've played around with a few sample functions, and my impression so far is that it always works if the absolute value of the slope of the iterated function (the right-hand side after isolating the degree-one term and dividing both sides to make its coefficient one) never exceeds one. But I don't know whether that's in fact true.
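For concreteness, here is a small Python sketch (mine, not from the thread) of exactly this behaviour. The equation x^2 - x - 1 = 0 (root: the golden ratio, about 1.618) is rearranged two ways as x = g(x); the rearrangement whose slope at the root exceeds one diverges, while the one whose slope is below one converges.

```python
import math

def iterate(g, x0, n=30):
    """Fixed point iteration x_{k+1} = g(x_k); returns None on blow-up."""
    x = x0
    for _ in range(n):
        x = g(x)
        if abs(x) > 1e12:   # treat as divergence
            return None
    return x

# Rearrangement 1: x = x^2 - 1.  Slope at the root is 2*phi, about 3.24 > 1.
print(iterate(lambda x: x * x - 1, 2.0))          # diverges (None)

# Rearrangement 2: x = sqrt(x + 1).  Slope at the root is 1/(2*phi), about 0.31 < 1.
print(iterate(lambda x: math.sqrt(x + 1), 2.0))   # converges to ~1.618
```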
 
  • #2
Square root? I guess you are using

[tex]x_{n+1} = \frac 1 2 (x_n + \frac a {x_n})[/tex]

and xn is an approximation of the square root of a?

As far as I remember this is a variant of the Newton method, and a fixed point iteration converges only if the derivative of the iterated function between the starting point and the solution is smaller than 1 in absolute value. Or something close to that; it could be that it depends on the side from which you approach the solution, my imagination went on strike.
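That recurrence can be sketched in a few lines of Python (the function name is mine). Viewed as x_{n+1} = g(x_n) with g(x) = (x + a/x)/2, it has g'(√a) = 0 at the fixed point, which is why it converges so much faster than a generic fixed point iteration:

```python
def babylonian_sqrt(a, x0=1.0, steps=10):
    """Heron's (Babylonian) method: the Newton iteration for x^2 - a.
    Converges for any positive starting guess."""
    x = x0
    for _ in range(steps):
        x = 0.5 * (x + a / x)
    return x

print(babylonian_sqrt(2.0))   # -> 1.4142135623730951
```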
 
  • #3
Unfortunately, LaTeX is no longer displaying correctly on this old computer of mine so I'll have to just type the characters and hope they appear correctly.

The program starts with the relationship

[tex]
\frac {1}{h} = \frac {1}{\sqrt{l^2 - w^2}} + \frac {1}{\sqrt{s^2 - w^2}}
[/tex]

where h is the intersection height, l and s are the lengths of the long and short ladders respectively, and w is the width between the buildings.

It rearranges this to

[tex]
w^2 = s^2 - ( \frac {1}{h} - \frac {1}{\sqrt{l^2 - w^2}} ) ^ {-2}
[/tex]

Then it starts with a guess value for [itex] w^2 [/itex], puts that into the right side of the equation to produce a new [itex] w^2 [/itex], plugs that back into the [itex] w^2 [/itex] on the right, and so on. After about ten cycles it takes the square root of the last value [itex] w^2 [/itex] it found, and that's the approximate value of [itex] w [/itex].

Since this is of the form [itex] a_{n+1} = f (a_n) [/itex] (where [itex] w^2 [/itex] is the [itex] a [/itex]), it's an example of fixed point iteration as I understand it.
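The procedure described above can be reconstructed in Python roughly as follows (a sketch of my own, not the actual program; the parameter names and the starting guess are mine):

```python
import math

def crossed_ladders_width(l, s, h, cycles=10):
    """Fixed point iteration for the crossed-ladders problem:
    iterate w^2 = s^2 - (1/h - 1/sqrt(l^2 - w^2))**(-2),
    then take the square root of the final w^2."""
    w2 = 0.5 * s * s              # starting guess for w^2
    for _ in range(cycles):
        w2 = s * s - (1.0 / h - 1.0 / math.sqrt(l * l - w2)) ** -2
    return math.sqrt(w2)

# A classic instance: ladders of 40 and 30, crossing at height 10.
w = crossed_ladders_width(40.0, 30.0, 10.0)
print(w)   # about 26.03
```

With these inputs the iteration settles down after only a handful of cycles, which matches the program's behaviour described above.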
 
  • #4
snoopies622 said:
So my question is: under what circumstances will fixed point iteration converge?

Fixed point iteration will converge provided the function involved decreases distances, i.e. is a contraction. More precisely, the condition is that

[tex]|f(x) - f(y)| < k \; |x - y|[/tex] for all x, y
for some [tex]k < 1[/tex]; the iteration then converges to the unique fixed point from any starting value.

Another test that guarantees convergence is that [tex]|f'(x)| < k[/tex] for some [tex]k < 1[/tex] on an interval containing the fixed point and all the iterates; by the mean value theorem this implies the contraction condition above.
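The geometric shrinking that the contraction condition guarantees is easy to check numerically. A small sketch (the example function is mine) using g(x) = cos(x)/2, which is a contraction on the whole real line with k = 1/2 because |g'(x)| = |sin(x)|/2 ≤ 1/2:

```python
import math

g = lambda x: 0.5 * math.cos(x)

# Locate the fixed point by iterating many times.
star = 3.0
for _ in range(200):
    star = g(star)

# Each step shrinks the distance to the fixed point by at least k = 0.5.
x = 3.0
for n in range(8):
    ratio = abs(g(x) - star) / abs(x - star)
    print(f"step {n}: error ratio = {ratio:.3f}")   # always <= 0.5
    x = g(x)
```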
 
  • #5
Thanks awkward; that's about what I suspected.
 

Related to When will fixed point iteration converge?

1. What is an iteration technique?

An iteration technique is a problem-solving method that involves repeatedly testing and refining a solution until it meets specific criteria or achieves a desired outcome. It is commonly used in mathematics, computer programming, and scientific research.

2. How does an iteration technique work?

An iteration technique typically involves breaking a problem down into smaller, more manageable parts and then using a series of iterative steps to solve each part. The process is repeated until the solution meets the desired criteria. This allows for gradual refinement and improvement of the solution.

3. What are the benefits of using an iteration technique?

One of the main benefits of using an iteration technique is that it allows for continuous improvement and refinement of a solution. It also helps to break down complex problems into smaller, more manageable parts, making them easier to solve. Additionally, an iteration technique can help identify and correct errors in a solution.

4. What are some common types of iteration techniques?

There are several common types of iteration techniques, including trial and error, Newton's method, and gradient descent. Trial and error involves systematically trying different solutions until the desired outcome is achieved. Newton's method uses calculus to find the roots of a function. Gradient descent is a popular optimization technique used in machine learning and artificial intelligence.
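To connect this back to the thread's question: Newton's method is itself a fixed point iteration, x_{n+1} = g(x_n) with g(x) = x - f(x)/f'(x). At a simple root g'(root) = 0, which is why it converges so much faster than the generic case. A short sketch (function names are mine):

```python
def newton(f, fprime, x0, steps=8):
    """Newton's method as fixed point iteration of g(x) = x - f(x)/f'(x)."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# Cube root of 2 as the root of f(x) = x^3 - 2.
print(newton(lambda x: x**3 - 2, lambda x: 3 * x**2, 1.0))   # -> 1.2599210498948732
```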

5. Are there any drawbacks to using an iteration technique?

While iteration techniques can be very effective in solving problems, they can also be time-consuming and require a lot of trial and error. They may also not be suitable for all types of problems, as some may have more efficient solutions. Additionally, if the iterative steps are not carefully designed, the process may not converge on an optimal solution.
