Preconditioning the Steepest Descent Method

In summary, the method of steepest descent (SDM) is an iterative way to solve a system of equations AX = F: starting from an initial approximation X0, it improves the approximation step by step until the desired accuracy is reached. The method works well for symmetric positive definite matrices, but convergence can be slow when the matrix is ill-conditioned (and it does not apply directly to non-symmetric matrices). Preconditioning is a technique for improving the convergence of an iterative method such as SDM: the system of equations is pre-multiplied by a matrix B (the preconditioner) so that the transformed system is better conditioned. The choice of B greatly affects the performance of the method.
  • #1
Deimantas

Homework Statement





Homework Equations



Pk = A*Xk - F

τk = (Pk · Pk) / ((A*Pk) · Pk)

Xk+1 = Xk - τk * Pk


The Attempt at a Solution



We are given a system of equations:


AX = F, where A (3×3) = (2 1 0.95; 1 2 1; 0.95 1 2) and F = (3.95; 4; 3.95)ᵀ

(The solution is (1; 1; 1)ᵀ)

We choose the first X freely: X0 = (0; 0; 0)ᵀ

Therefore:

(1) A*X0 - F = -F = (-3.95; -4; -3.95)ᵀ

(2) (A*X0 - F) · (A*X0 - F) = 47.205

(3) A*(A*X0 - F) = (-15.6525; -15.9; -15.6525)ᵀ

(4) (A*(A*X0 - F)) · (A*X0 - F) = 187.25475

So τ0 = 47.205 / 187.25475 ≈ 0.252090

Therefore our new approximate X is:

(5) X1 = X0 - τ0 * (A*X0 - F) = (0; 0; 0)ᵀ - 0.252090 * (-3.95; -4; -3.95)ᵀ ≈ (0.995754; 1.008359; 0.995754)ᵀ

We repeat this process with the new approximation of X, until we reach the desired accuracy.
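
To make the steps above concrete, here is a minimal NumPy sketch of the same loop (just a sketch, not my actual program; the stopping tolerance and iteration cap are arbitrary):

```python
import numpy as np

# System from the problem statement
A = np.array([[2.0,  1.0, 0.95],
              [1.0,  2.0, 1.0 ],
              [0.95, 1.0, 2.0 ]])
F = np.array([3.95, 4.0, 3.95])

X = np.zeros(3)                      # X0 = (0; 0; 0)^T
for k in range(100):
    P = A @ X - F                    # Pk = A*Xk - F
    if np.linalg.norm(P) < 1e-10:    # stop once the residual is small enough
        break
    tau = (P @ P) / ((A @ P) @ P)    # τk = (Pk · Pk) / ((A*Pk) · Pk)
    X = X - tau * P                  # Xk+1 = Xk - τk * Pk

print(X)                             # approaches (1; 1; 1)^T
```

On the first pass this gives τ0 ≈ 0.252090 and X1 ≈ (0.995754; 1.008359; 0.995754)ᵀ, matching the numbers above.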

This is the method of steepest descent without preconditioning. Can anyone show me how each step of this algorithm changes when preconditioning is applied? Suppose we use a preconditioner matrix B equal to the inverse of A (or any other matrix you think would work best).

I'd highly appreciate any help. I've already written a program that solves a system of equations with SDM, and it works fine; I just don't understand how to apply preconditioning. Hopefully the equations I've written are readable; apologies if not.
 
  • #2
The topic is a bit "large" to cover in a forum like this one. An online search using the keywords "steepest descent" + "preconditioned" (or "preconditioning") yields many, many articles.
 

Related to Preconditioning the Steepest Descent Method

1. What is preconditioning in the context of the Steepest Descent Method?

Preconditioning is a technique used to improve the convergence rate of the Steepest Descent Method, an iterative algorithm for finding the minimum of a function (for a linear system AX = F with symmetric positive definite A, that function is the quadratic (1/2)·XᵀAX - FᵀX, whose minimizer is the solution of the system). It involves transforming the original system of equations into a better conditioned one, which can lead to faster convergence and improved accuracy.
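
As a rough illustration, one common way to apply a preconditioner B (ideally B ≈ A⁻¹) in steepest descent is to multiply the residual by B and search along that direction instead. The sketch below shows a single preconditioned step for the system discussed in this thread; the function name and the choice B = A⁻¹ are for demonstration only:

```python
import numpy as np

def preconditioned_sd_step(A, F, X, B):
    """One steepest-descent step using preconditioner B (B should approximate A^-1)."""
    P = A @ X - F                   # ordinary residual, as in the unpreconditioned method
    Z = B @ P                       # preconditioned residual: the new search direction
    tau = (P @ Z) / ((A @ Z) @ Z)   # exact line search along Z
    return X - tau * Z

A = np.array([[2.0,  1.0, 0.95],
              [1.0,  2.0, 1.0 ],
              [0.95, 1.0, 2.0 ]])
F = np.array([3.95, 4.0, 3.95])

B = np.linalg.inv(A)                            # the "perfect" preconditioner
X1 = preconditioned_sd_step(A, F, np.zeros(3), B)
print(X1)                                       # already (1, 1, 1) up to round-off
```

With B = A⁻¹ the preconditioned residual Z solves A·Z = P exactly, so a single step lands on the solution; cheaper choices of B only approximate this but can still speed up convergence considerably.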

2. How does preconditioning affect the convergence rate of the Steepest Descent Method?

Preconditioning can significantly improve the convergence rate of the Steepest Descent Method. By transforming the system of equations into a better conditioned form, the search directions point more nearly toward the minimum, so fewer iterations are required to reach the solution.
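
To make "faster convergence" concrete: for a symmetric positive definite matrix, the classical worst-case bound says steepest descent reduces the A-norm of the error by a factor of (κ - 1)/(κ + 1) per iteration, where κ is the condition number; a good preconditioner effectively replaces κ by the much smaller condition number of the preconditioned operator. A small numerical check (using the matrix from this thread, with B = A⁻¹ purely for illustration):

```python
import numpy as np

A = np.array([[2.0,  1.0, 0.95],
              [1.0,  2.0, 1.0 ],
              [0.95, 1.0, 2.0 ]])

kappa = np.linalg.cond(A)                  # condition number of A (symmetric positive definite)
print(kappa, (kappa - 1) / (kappa + 1))    # worst-case per-step error reduction factor

# With an ideal preconditioner B ~ A^-1, B*A is close to the identity,
# so its condition number is near 1 and the reduction factor approaches 0.
B = np.linalg.inv(A)
print(np.linalg.cond(B @ A))               # ~1, up to round-off
```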

3. What are some common preconditioning techniques used in the Steepest Descent Method?

There are several popular preconditioning techniques used in the Steepest Descent Method, including diagonal scaling, incomplete Cholesky factorization, and multigrid methods. These techniques involve manipulating the original system of equations in various ways to improve the conditioning and convergence rate.
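
As a sketch of the simplest of these, diagonal (Jacobi) scaling takes B to be the inverse of A's diagonal. The matrix below is made up purely to illustrate the effect; for the matrix in this thread the diagonal is constant, so diagonal scaling would only rescale the system without improving its conditioning:

```python
import numpy as np

def jacobi_preconditioner(A):
    """Diagonal (Jacobi) scaling: B = diag(A)^-1, the cheapest common preconditioner."""
    return np.diag(1.0 / np.diag(A))

# A badly scaled symmetric positive definite matrix (made up for illustration)
A = np.array([[1000.0, 1.0],
              [   1.0, 1.0]])
B = jacobi_preconditioner(A)

print(np.linalg.cond(A))       # roughly 1000
print(np.linalg.cond(B @ A))   # only a few: the bad row scaling is gone
```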

4. When is preconditioning necessary for the Steepest Descent Method?

Preconditioning is not always necessary for the Steepest Descent Method, as the algorithm can still converge without it. However, in cases where the system of equations is highly ill-conditioned, preconditioning can significantly improve the convergence rate and accuracy of the solution.

5. Are there any drawbacks to using preconditioning in the Steepest Descent Method?

While preconditioning can greatly improve the performance of the Steepest Descent Method, it does come with some drawbacks. Preconditioning adds an extra step to the algorithm, which can increase the computational cost. Additionally, choosing the appropriate preconditioner for a specific problem can be challenging and may require some trial and error.
