Let $A$ be an $n\times n$ matrix with a known LU decomposition, and let $u\in \mathbb{R}^n, v\in \mathbb{R}^{n+1}$.

Show that the number of multiplications and divisions needed to get an LU decomposition of the $(n+1)\times (n+1)$ matrix $$\begin{pmatrix}A & u \\ v^T\end{pmatrix}$$ is at most $O(n^2)$.

To get $U$, we have to eliminate the $n$ entries below the main diagonal in the last row (the elements of $v^T$ except the last one). Each elimination requires computing the row...
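One way to see the $O(n^2)$ bound concretely: write the bordered matrix as $\begin{pmatrix}L & 0 \\ l^T & 1\end{pmatrix}\begin{pmatrix}U & w \\ 0 & s\end{pmatrix}$, so that $Lw=u$, $U^Tl=\hat v$ (the first $n$ entries of $v$), and $s=v_{n+1}-l^Tw$; each triangular solve costs $O(n^2)$ multiplications/divisions. A minimal Python sketch (my own code and naming, not from the post):

```python
def forward_sub(T, b):
    # solve T z = b for a lower-triangular T by forward substitution: O(n^2)
    n = len(b)
    z = [0.0] * n
    for i in range(n):
        z[i] = (b[i] - sum(T[i][j] * z[j] for j in range(i))) / T[i][i]
    return z

def bordered_lu(L, U, u, v):
    """Given A = L U (n x n), factor the bordered matrix reusing L and U."""
    n = len(u)
    w = forward_sub(L, u)                                  # L w = u
    Ut = [[U[j][i] for j in range(n)] for i in range(n)]   # U^T is lower-triangular
    l = forward_sub(Ut, v[:n])                             # U^T l = v[0:n]
    s = v[n] - sum(lk * wk for lk, wk in zip(l, w))        # last pivot
    return w, l, s

# tiny example: A = L U with unit lower-triangular L
L = [[1.0, 0.0], [2.0, 1.0]]
U = [[3.0, 1.0], [0.0, 4.0]]
w, l, s = bordered_lu(L, U, [1.0, 2.0], [1.0, 1.0, 1.0])
```

No entry of $A$ is refactored, which is where the savings over a full $O(n^3)$ decomposition come from.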

Number of multiplications and divisions for LU decomposition

Let $A=L^TDL$ be the Cholesky decomposition of a symmetric matrix, where the upper triangular matrix $L$ has only $1$s on the diagonal and $D$ is a diagonal matrix with positive elements on the diagonal.

I want to show that such a decomposition exists if and only if $A$ is positive definite.

Could you give me a hint on how we could show that?

If we suppose that $A=L^TDL$ is the Cholesky decomposition, then if $A$ is positive definite the diagonal elements must be positive and so the...
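For a numerical sanity check of that direction, here is a sketch of the elimination that produces $D$ (my own code, written in the more common lower-triangular convention $A=LDL^T$, which is just the transpose of the post's notation): when $A$ is positive definite all pivots in $D$ come out positive, and an indefinite matrix produces a negative pivot.

```python
def ldlt(A):
    # A = L D L^T with unit lower-triangular L and diagonal D (no pivoting)
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    D = [0.0] * n
    for j in range(n):
        D[j] = A[j][j] - sum(L[j][k] ** 2 * D[k] for k in range(j))
        for i in range(j + 1, n):
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] * D[k] for k in range(j))) / D[j]
    return L, D

# positive definite example: all pivots positive
_, D = ldlt([[4.0, 2.0], [2.0, 3.0]])        # D = [4.0, 2.0]
# indefinite example: a negative pivot appears
_, E = ldlt([[1.0, 2.0], [2.0, 1.0]])        # E = [1.0, -3.0]
```

The proof itself can follow the same structure: positivity of each pivot is equivalent to positivity of the corresponding leading principal minor.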

Such a decomposition exists iff A is positive definite

Let's say the intersection point here is 5:

f(g) is for range [0<n<=5]

and

f(x) is for range [5<=n<10]

For f(g), the real root from the quadratic formula is 4.3, which lies...

Quadratic equations: the intersection point is the minimum instead of the roots!

My questions are:

(1) What is the dot ". " at the end of the formula, and what does it mean?

(2) I want to use it only for real numbers, not complex ones; does the formula remain the same?

(3) It states the solution is for this kind of equation \[ ax^3+ax^2+cx+d=0 \]. What if I have an equation like \[ ax^3+ax^2-cx+d=0 \]? Do I have...

Help required in understanding the cubic equation formula for polynomials of degree three!

The functions \begin{equation*}f(x)=\frac{1}{3}(x-2)^2e^{x/3} \ \text{ and } \ p(x)=-\frac{2}{3}x+\frac{4}{3}\end{equation*} have exactly two real intersection points in the region $x\geq 0$.

Calculate numerically the area between the graphs of these two functions, to an accuracy of two decimal digits.

For that, do we have to calculate the intersection points numerically, using for example Newton's method?
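It pays to test the obvious candidates first: $f(0)=p(0)=\frac{4}{3}$ and $f(2)=p(2)=0$, so the two intersection points are exactly $x=0$ and $x=2$ and no Newton iteration is needed; only the integral requires numerics. A sketch with composite Simpson (my own code, not from the exercise):

```python
import math

f = lambda x: (1/3) * (x - 2) ** 2 * math.exp(x / 3)
p = lambda x: -(2/3) * x + 4/3

# the intersections are exact: f(0) = p(0) = 4/3 and f(2) = p(2) = 0
assert abs(f(0) - p(0)) < 1e-12 and abs(f(2) - p(2)) < 1e-12

# composite Simpson's rule for the area between the graphs on [0, 2]
n = 200                               # even number of subintervals
h = 2.0 / n
g = lambda x: abs(p(x) - f(x))        # p >= f on [0, 2]
area = g(0) + g(2) + sum((4 if i % 2 else 2) * g(i * h) for i in range(1, n))
area *= h / 3
print(round(area, 2))                 # prints 0.27
```

The closed form $\frac{106}{3}-18e^{2/3}\approx 0.2741$ (from the antiderivative $e^{x/3}\left((x-2)^2-6(x-2)+18\right)$) confirms the two-decimal answer $0.27$.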

I want to calculate the integral $\int_{-1}^2\sin \left (\pi (t-1)\right )\delta (-t+1)\, dt$. I have done the following so far:

$$\int_{-\infty}^{+\infty}\sin \left (\pi (t-1)\right )\delta (-t+1)\, dt=\int_{-\infty}^{-1}\sin \left (\pi (t-1)\right )\delta (-t+1)\, dt+\int_{-1}^2\sin \left (\pi (t-1)\right )\delta (-t+1)\, dt+\int_2^{+\infty}\sin \left (\pi (t-1)\right )\delta (-t+1)\, dt$$

We have that $\int_{-\infty}^{+\infty}\sin \left (\pi (t-1)\right )\delta (-t+1)\...
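Since the delta function is even, $\delta(-t+1)=\delta(t-1)$, and the point $t=1$ lies inside $[-1,2]$, so the sifting property settles the finite-interval piece directly (a worked line using nothing beyond the standard property):

$$\int_{-1}^2\sin \left (\pi (t-1)\right )\delta (-t+1)\, dt=\int_{-1}^2\sin \left (\pi (t-1)\right )\delta (t-1)\, dt=\sin \left (\pi (1-1)\right )=\sin 0=0$$

The two pieces over $(-\infty,-1)$ and $(2,+\infty)$ vanish because $t=1$ lies outside them.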

Integral with a Dirac function over a finite interval

When it is given that a signal $x(t)$ has a real-valued Fourier transform $X(f)$, is the signal necessarily real-valued?

I have thought the following:

$X_R(\omega)=\frac{1}{2}[X(\omega)+X^{\star}(\omega)]\iff \frac{1}{2}[x(t)+x^{\star}(-t)]=x_e(t) \\ X_I(\omega)=\frac{1}{2i} [X(\omega)-X^{\star}(\omega)]\iff \frac{1}{2i}[x(t)-x^{\star}(-t)]=-i\cdot x_o(t)$

where $X_R(\omega)$ and $X_I(\omega)$ are the real and imaginary parts of $X(\omega)$, and $x_e(t)$ and $x_o(t)$ are the even and odd parts of $x(t)$, respectively.

So...

Property of real-valued Fourier transform

A real periodic signal with period $T_0=2$ has the Fourier coefficients $$X_k=\left [2/3, \ \tfrac{1}{3}e^{j\pi/4}, \ \tfrac{1}{3}e^{-j\pi/3}, \ \tfrac{1}{4}e^{j\pi/12}, \ e^{-j\pi/8}\right ]$$ for $k=0,1,2,3,4$.

I want to calculate $\int_0^{T_0}x^2(t)\, dt$.

I have done the following:

It holds that $$\frac{1}{T_0}\int_{T_0}|x(t)|^2\, dt=\sum_{k=-\infty}^{+\infty}|X_k|^2$$ right?

Then do we get $$\int_{T_0}|x(t)|^2\, dt=2\sum_{k=-\infty}^{+\infty}|X_k|^2=2\left [\left(\frac{2}{3}\right...
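Yes, with one refinement: since $x(t)$ is real, $X_{-k}=X_k^{\star}$, so $|X_{-k}|^2=|X_k|^2$ and every $k\geq 1$ term appears twice in the two-sided Parseval sum. A quick numerical check (my own code; the phases drop out, only the magnitudes matter):

```python
import cmath, math

T0 = 2.0
Xk = [2/3,
      (1/3) * cmath.exp(1j * math.pi / 4),
      (1/3) * cmath.exp(-1j * math.pi / 3),
      (1/4) * cmath.exp(1j * math.pi / 12),
      cmath.exp(-1j * math.pi / 8)]

# x(t) real => X_{-k} = conj(X_k), so the two-sided sum doubles k >= 1
energy = T0 * (abs(Xk[0]) ** 2 + 2 * sum(abs(c) ** 2 for c in Xk[1:]))
```

Collecting the fractions gives the exact value $\frac{217}{36}\approx 6.028$ for $\int_0^{T_0}x^2(t)\,dt$.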

Calculate the integral using the Fourier coefficients

The function \begin{equation*}f(x)=\frac{x}{e^{x/9}}\cdot \frac{\sin \left (\pi (x-1)\right )}{x-1}\end{equation*} has the same value at exactly one position $\overline{x}>1$ as at the position $x=1$. Determine the position $\overline{x}$ using an iteration method with accuracy of two decimal digits.

I have done the following:

We have that \begin{equation*}f(\overline{x})=f(1)\Rightarrow f(\overline{x})-f(1)=0 \Rightarrow g(x):=f(x)-f(1)\end{equation*}

First we have to...
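One workable iteration (my choice of method and bracket, not prescribed by the exercise) is bisection on $g$: note that $f(1)$ has to be read as the limit value $\pi e^{-1/9}$ of the removable singularity, and a quick evaluation shows $g(1.1)>0$ while $g(1.5)<0$, so a root of $g$ lies in $[1.1, 1.5]$.

```python
import math

def f(x):
    # sin(pi*(x-1))/(x-1) -> pi as x -> 1 (removable singularity)
    q = math.pi if x == 1 else math.sin(math.pi * (x - 1)) / (x - 1)
    return x * math.exp(-x / 9) * q

g = lambda x: f(x) - f(1)

# g(1.1) > 0 and g(1.5) < 0, so bisect on [1.1, 1.5]
a, b = 1.1, 1.5
for _ in range(60):
    m = (a + b) / 2
    if g(a) * g(m) <= 0:
        b = m
    else:
        a = m
xbar = (a + b) / 2
```

Bisection halves the bracket each step, so two-decimal accuracy is reached after a handful of iterations; Newton on $g$ would of course converge faster once a good starting point is known.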

Determine the position using an iteration method

We have a matrix $A\in \mathbb{R}^{m\times n}$ which has rank $n$. The condition number is defined as $\displaystyle{k(A)=\frac{\max_{\|x\|=1}\|Ax\|}{\min_{\|x\|=1}\|Ax\|}}$.

I want to show that $k_2(A^TA)=\left (k_2(A)\right )^2$.

We have that $$k_2(A^TA)=\frac{\max_{\|x\|_2=1}\|(A^TA)x\|_2}{\min_{\|x\|_2=1}\|(A^TA)x\|_2}$$

It holds that $$\|(A^TA)x\|_2^2=\left ((A^TA)x, (A^TA)x\right )=\left (A^TAx\right )^T\left (A^TAx\right )=x^TA^TAA^TAx=\left (Ax\right )^T\left...
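A shortcut that avoids expanding the norms: $A^TA$ is symmetric positive definite (because $A$ has rank $n$), so over unit vectors the extreme values of $\|(A^TA)x\|_2$ are its extreme eigenvalues, which are the squared singular values of $A$. As a sketch:

$$k_2(A^TA)=\frac{\lambda_{\max}(A^TA)}{\lambda_{\min}(A^TA)}=\frac{\sigma_{\max}(A)^2}{\sigma_{\min}(A)^2}=\left (\frac{\max_{\|x\|_2=1}\|Ax\|_2}{\min_{\|x\|_2=1}\|Ax\|_2}\right )^2=\left (k_2(A)\right )^2$$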

Condition number of A'A

I am working on using dimensional regularization of a loop integral in QED. I don't think the specific application to QED is important, but I will say that the original integral is in 4D Minkowski space-time.

The example in the text starts with a Wick rotation of an integral in Minkowski space, changing the time coordinate by \(\displaystyle p^0 \to i p^4\). The problem is now to...

Dimensional regularization of an integral

\begin{equation}

x^{k+1}=x^k-\alpha\nabla f (x^k).

\end{equation}

We know that the sequence will converge to a critical point.

Now consider the new function $\tilde{f}(x)=f(x)+x'Ax$ with some...

How the sequence of the Gradient Descent algorithm is affected when the function is perturbed to be more steep

So the tricky part here is that the force of gravity is non-constant as you drill through the earth. It is near-zero at the center, and climbs back up to $g$ at the surface.

Definitions: let

\begin{align*}

M_e&=\text{total mass of the earth}\\

y&=\text{position of the ball; }y=0\text{ at the center of the earth, and...

Correctness of Ball-Dropping Problem Answer

This problem arises in data compression: consider the bits that make up a file (or a substring of bits of the file) and treat them as a number (i.e. the bits are the binary representation of this number). If we could write a pair of function+input(s) whose output happens to be the substring, this...

How to find functions & inputs whose output is a specific number

We consider the initial value problem:

$$x'(t)=-5x(t)-2y(t), t \in [0,1] \\ y'(t)=-2x(t)-100y(t), t \in [0,1] \\ x(0)=1, y(0)=1.$$

I want to solve the above problem using the forward Euler method, the trapezoid method and the backward Euler method and to represent in common graphs the corresponding values of $(x^n)^2+(y^n)^2$.

First of all, the formula for the approximation $y$ that we have using the backward Euler method is the following, right?

`y = inv(eye(2)-h*A)*y` ...
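Yes, that is the backward Euler update $y_{n+1}=(I-hA)^{-1}y_n$. In case a language-agnostic sketch helps (my own code, with the $2\times 2$ inverse written out by hand since $A$ is constant):

```python
# x' = A x with A = [[-5, -2], [-2, -100]], (x(0), y(0)) = (1, 1)
h, steps = 0.01, 100                      # uniform grid on [0, 1]
a, b, c, d = -5.0, -2.0, -2.0, -100.0
m11, m12 = 1 - h * a, -h * b              # M = I - h*A, built once
m21, m22 = -h * c, 1 - h * d
det = m11 * m22 - m12 * m21
x, y = 1.0, 1.0
norms = [x * x + y * y]                   # the quantity (x^n)^2 + (y^n)^2
for _ in range(steps):
    # (x, y) <- M^{-1} (x, y) via the explicit 2x2 inverse
    x, y = (m22 * x - m12 * y) / det, (m11 * y - m21 * x) / det
    norms.append(x * x + y * y)
```

Because $A$ is symmetric with negative eigenvalues, backward Euler makes $(x^n)^2+(y^n)^2$ decrease monotonically for every $h>0$, which is exactly the behaviour worth contrasting with forward Euler in the plots.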

Compare the methods

Thank you for reading my post. I'm stuck with picking the correct 'generator' element in the attached example (Simplex method). As you can see, the solution keeps picking the yellow elements as 'generators', but I don’t understand why we can't choose the purple ones.

Can somebody please explain?


In Section 2.3 it states to use Eq. 1 to find the kernel function K and the bivariate bandwidth H. Question: is Eq. 1 using the data for all cells for the given time period or...

Practical Application of Research Paper help

Can you tell me where these equations:

```
A = 6*(f2-f3)/z3+3*(d2+d3);              % cubic fit
B = 3*(f3-f2)-z3*(d3+2*d2);
z2 = (sqrt(B*B-A*d2*z3*z3)-B)/A;         % numerical error
```

in MATLAB's fmincg.m

https://github.com/emersonmoretto/mlclass-ex3/blob/master/fmincg.m

come from?

It is either cubic interpolation or cubic extrapolation... I have looked for these equations on many websites and in numerical optimization books, but I cannot find them...

extrapolation and interpolation in line search optimization

We have the matrix $$A=\begin{pmatrix}2 & 0.4 & -0.1 & 0.3 \\ 0.3 & 3 & -0.1 & 0.2 \\ 0 & 0.7 & 3 & 1 \\ 0.2 & 0.1 & 0 & 4\end{pmatrix}$$ We get the row Gershgorin circles: $$K_1=\{z\in \mathbb{C} : |z-2|\leq 0.8 \} \\ K_2=\{z\in \mathbb{C} : |z-3|\leq 0.6 \} \\ K_3=\{z\in \mathbb{C} : |z-3|\leq 1.7 \} \\ K_4=\{z\in \mathbb{C} : |z-4|\leq 0.3 \} $$ and the column Gershgorin circles: $$K_1'=\{z\in \mathbb{C} : |z-2|\leq 0.5 \} \\ K_2'=\{z\in \mathbb{C} : |z-3|\leq 1.2 \} \\...

Interval of eigenvalues using Gershgorin circles

Let $S_{X,3}$ be the vector space of cubic spline functions on $[-1,1]$ with respect to the points $$X=\left \{x_0=-1, x_1=-\frac{1}{2}, x_2=0, x_3=\frac{1}{2}, x_4=1\right \}$$ I want to check if the function $$f(x)=\left ||x|^3-\left |x+\frac{1}{3}\right |^3\right |$$ is in $S_{X,3}$.

We have that \begin{align*}f(x)&=\left ||x|^3-\left |x+\frac{1}{3}\right |^3\right |\\ & =\begin{cases}|x|^3-\left |x+\frac{1}{3}\right |^3 , & |x|^3-\left |x+\frac{1}{3}\right |^3\geq 0...

Is f in the vector space of cubic spline functions?

I want to determine the LU decomposition of

$A=\begin{pmatrix}0 & 2 & 1\\1 & 10 & 1 \\1 & 1 & 1\end{pmatrix}$ with total pivoting.

I have done the following:

The largest element of the whole matrix is $10$, so we exchange the first two rows and the first two columns, and then we get $\begin{pmatrix}10 & 1 & 1 \\2 & 0 & 1\\1 & 1 & 1\end{pmatrix}$.

Applying now the Gauss algorithm we get $\begin{pmatrix}10 & 1 & 1 \\0 & -\frac{1}{5} & \frac{4}{5}\\0 &...

LU decomposition: Total pivoting

I want to calculate the integral $$\int_0^1\frac{1}{x+3}\, dx$$ with the Gaussian quadrature formula that integrates exactly all polynomials of degree $6$.

The Gaussian quadrature with $n$ nodes integrates exactly all polynomials $\Phi (x)$ of degree at most $2n-1$. In this case we consider $n=4$.

The formula is \begin{equation*}\int_{0}^1\frac{1}{x+3}\, dx\approx \sum_{i=1}^nf(x_i)\cdot w_i=f(x_1)\cdot w_1+f(x_2)\cdot w_2+f(x_3)\cdot w_3+f(x_4)\cdot w_4\end{equation*}

For the calculations...
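For the calculations one needs the standard 4-point Gauss-Legendre nodes and weights (tabulated values, stated here for $[-1,1]$) and the affine map to $[0,1]$. A sketch (my own code, not part of the exercise; the exact value $\ln\frac{4}{3}$ is only used as a check):

```python
import math

# standard 4-point Gauss-Legendre nodes and weights on [-1, 1]
nodes   = [-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116]
weights = [ 0.3478548451,  0.6521451549, 0.6521451549, 0.3478548451]

f = lambda x: 1.0 / (x + 3.0)

# affine map [-1, 1] -> [0, 1]: x = (t + 1)/2, dx = dt/2
approx = sum(w * f((t + 1) / 2) for t, w in zip(nodes, weights)) / 2
exact = math.log(4 / 3)        # antiderivative ln(x + 3)
```

Since the integrand is smooth, the degree-7-exact rule lands within about $10^{-9}$ of the true value here.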

Gaussian quadrature formula

I want to determine an approximation of a cubic polynomial that has at the points $$x_0=-2, \ x_1=-1, \ x_2=0 , \ x_3=3, \ x_4=3.5$$ the values $$y_0=-33, \ y_1=-20, \ y_2=-20.1, \ y_3=-4.3 , \ y_4=32.5$$ using the least squares method.

So we are looking for a cubic polynomial $p(x)$ such that $$\sum_{i=0}^4\left (p(x_i)-y_i\right )^2$$ is minimal, right?

Let $p(x)=a_3x^3+a_2x^2+a_1x+a_0$. Then we get the following sum:

\begin{align*}\sum_{i=0}^4\left...
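Setting the partial derivatives of that sum to zero leads to the normal equations $(V^TV)a=V^Ty$ with the Vandermonde-type design matrix $V$. A self-contained sketch (my own code; the final check verifies that the gradient of the sum of squares vanishes at the computed coefficients):

```python
xs = [-2.0, -1.0, 0.0, 3.0, 3.5]
ys = [-33.0, -20.0, -20.1, -4.3, 32.5]

# design matrix rows (1, x, x^2, x^3) for p(x) = a0 + a1 x + a2 x^2 + a3 x^3
V = [[x ** k for k in range(4)] for x in xs]

# normal equations (V^T V) a = V^T y
M = [[sum(V[r][i] * V[r][j] for r in range(5)) for j in range(4)] for i in range(4)]
rhs = [sum(V[r][i] * ys[r] for r in range(5)) for i in range(4)]

# Gaussian elimination with partial pivoting, then back substitution
for col in range(4):
    piv = max(range(col, 4), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for r in range(col + 1, 4):
        fac = M[r][col] / M[col][col]
        for c in range(col, 4):
            M[r][c] -= fac * M[col][c]
        rhs[r] -= fac * rhs[col]
a = [0.0] * 4
for i in range(3, -1, -1):
    a[i] = (rhs[i] - sum(M[i][j] * a[j] for j in range(i + 1, 4))) / M[i][i]

p = lambda x: a[0] + a[1] * x + a[2] * x ** 2 + a[3] * x ** 3
```

For a $4\times 4$ system the normal equations are perfectly adequate; for larger or ill-conditioned fits a QR-based solve is the safer route.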

Least squares method: approximation of a cubic polynomial

Then there is the following question:

Explain why isolated roots are allowed in the weight function.

What exactly is meant by that? Could you explain it to me? What are isolated roots?


We have the system \begin{align*}&x_1=\left (5+x_1^2+x_2^2\right )^{-1} \\ &x_2=\left (x_1+x_2\right )^{\frac{1}{4}}\end{align*} and the set $G=\{\vec{x}\in \mathbb{R}^2: \|\vec{x}-\vec{c}\|_{\infty}\leq 0.2\}$ where $\vec{c}=(0.2,1)^T$.

I want to show with the Banach fixed-point theorem that the system has a solution in $G$.

I have done the following:

Let \begin{equation*}\Phi (x_1, x_2)=\begin{pmatrix}\left (5+x_1^2+x_2^2\right )^{-1} \\ \left (x_1+x_2\right...
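Alongside the contraction estimate, it is reassuring to see the iteration behave numerically: starting from the center $\vec{c}=(0.2,1)^T$ of $G$, iterating $\Phi$ converges and the limit stays inside $G$ (my own check, not a substitute for verifying the Banach hypotheses):

```python
def phi(x1, x2):
    return 1.0 / (5 + x1 ** 2 + x2 ** 2), (x1 + x2) ** 0.25

x1, x2 = 0.2, 1.0                 # start at the center c of G
for _ in range(100):
    x1, x2 = phi(x1, x2)

# the limit solves the system and lies in G = {||x - c||_inf <= 0.2}
r1, r2 = phi(x1, x2)
```

The observed limit is roughly $(0.163,\ 1.049)$, comfortably within the $\infty$-ball of radius $0.2$ around $\vec{c}$, which matches what the fixed-point theorem predicts once $\Phi(G)\subseteq G$ and the contraction bound are established.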

Banach fixed-point theorem : Existence of solution

We have the matrix \begin{equation*}A=\begin{pmatrix}1/2 & 1/5 & 1/10 & 1/17 \\ 1/5 & 1/2 & 1/5 & 1/10 \\ 1/10 & 1/5 & 1/2 & 1/5 \\ 1/17 & 1/10 & 1/5 & 1/10\end{pmatrix}\end{equation*}

I have applied the Cholesky decomposition and found that $A=\tilde{L}\cdot \tilde{L}^T$ where

\begin{equation*}\tilde{L}=\begin{pmatrix}\frac{1}{\sqrt{2}} & 0 & 0 & 0 \\ \frac{\sqrt{2}}{5} & \sqrt{\frac{21}{50}} & 0 & 0 \\ \frac{\sqrt{2}}{10} & \frac{4\sqrt{42}}{105} & \frac{2\sqrt{1155}}{105} &...

The decomposition of a symmetric positive definite matrix is unique

We have the following two versions of Gram Schmidt orthogonalization method:

Show that both algorithms are mathematically equivalent, i.e. $s_{ji}=r_{ji}$ for all $j\leq i$ and $\tilde{q}_i=q_i, i=1, \ldots , n$ for the output vectors.

I have done the following:

The difference between the two algorithms is the inner for loop, i.e. the definition of $r_{ji}$ and $s_{ji}$ respectively, or not?

If we show that these two are the same then it...
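Exactly: the only difference is whether the coefficient against $q_j$ is computed from the original column (classical, $r_{ji}=q_j^Ta_i$) or from the partially reduced vector (modified, $s_{ji}$); in exact arithmetic the parts already subtracted are orthogonal to $q_j$, so the coefficients coincide. A numerical illustration (my own code and naming, with both variants in one routine):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(cols, modified):
    n = len(cols)
    Q, R = [], [[0.0] * n for _ in range(n)]
    for i in range(n):
        v = cols[i][:]
        for j in range(i):
            # classical: coefficient from the original column a_i
            # modified:  coefficient from the partially reduced vector v
            R[j][i] = dot(Q[j], v if modified else cols[i])
            v = [vk - R[j][i] * qk for vk, qk in zip(v, Q[j])]
        R[i][i] = dot(v, v) ** 0.5
        Q.append([vk / R[i][i] for vk in v])
    return Q, R

cols = [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]]
Qc, Rc = gram_schmidt(cols, modified=False)
Qm, Rm = gram_schmidt(cols, modified=True)
```

On a well-conditioned example the two coefficient sets agree to machine precision; the famous difference between the variants only shows up in floating-point behaviour on nearly dependent columns.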

Two versions of Gram Schmidt orthogonalization method

We have the tridiagonal matrix $A=\begin{pmatrix}2 & 1 & \ldots & 0 \\ 1 & 2 & 1 & \ldots \\ \ldots & \ldots & \ldots & \ldots \\ 0 & \ldots & 1 & 2\end{pmatrix}$. I want to show that it is positive definite.

For that it is given the following hint:

1) $\langle x, Ax\rangle \geq 0$

2) $\langle x, Ax\rangle =0 \Rightarrow x=0$

I have done the following:

The $i$-th component of the vector $Ax$ is \begin{equation*}(Ax)_i=x_{i-1}+2x_i+x_{i+1} , \ i=1, 2, \ldots , n \ \text{...
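With the convention $x_0=x_{n+1}=0$, summing $x_i(Ax)_i$ and regrouping gives a sum of squares, which settles both parts of the hint:

$$\langle x, Ax\rangle =\sum_{i=1}^n x_i\left (x_{i-1}+2x_i+x_{i+1}\right )=2\sum_{i=1}^n x_i^2+2\sum_{i=1}^{n-1}x_ix_{i+1}=x_1^2+x_n^2+\sum_{i=1}^{n-1}\left (x_i+x_{i+1}\right )^2\geq 0$$

and equality forces $x_1=x_n=0$ together with $x_{i+1}=-x_i$ for all $i$, hence $x=0$.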

Show that the tridiagonal matrix is positive definite

We have the matrix $$A=\begin{pmatrix}1 & -2 & 1 \\ 3 & -1 & 2 \\ -2 & -2 & 1\end{pmatrix}$$ I want to apply the LR decomposition with column pivoting.

First we permute the first two rows and we get $$A=\begin{pmatrix}3 & -1 & 2 \\ 1 & -2 & 1 \\ -2 & -2 & 1\end{pmatrix}$$ Then we apply the Gauss algorithm and we get $$A=\begin{pmatrix}3 & -1 & 2 \\ 0 & -5/3 & 1/3 \\ 0 & -8/3 & 7/3\end{pmatrix}$$ Then the largest value of the submatrix in absolute value is $8/3$ so we...

LR decomposition with column pivoting

Use the Bisection, Newton, and Secant methods to solve (to at least 8 significant figures) the equation $\sin(x) = 0.98\cos(2x^2)$ over the interval $[0, 2.5]$, in radian units. For the Newton method, try several different initial guesses, including $x_0 = 1$. With the Secant method, use the same initial values as for the Bisection, as well as other appropriate values. Comment on the results by comparing the effectiveness of each method for this problem. In particular...
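One practical wrinkle worth noticing before coding: with $g(x)=\sin x-0.98\cos(2x^2)$ we get $g(0)\approx -0.98$ and $g(2.5)\approx -0.38$, so bisection on the full interval has no sign change; a sub-bracket such as $[0,1]$ works since $g(1)\approx 1.25>0$. A sketch of all three methods (my own code, assuming the equation reads $2x^2$):

```python
import math

g  = lambda x: math.sin(x) - 0.98 * math.cos(2 * x * x)
dg = lambda x: math.cos(x) + 0.98 * math.sin(2 * x * x) * 4 * x

def bisect(a, b, iters=60):              # requires g(a) * g(b) < 0
    for _ in range(iters):
        m = (a + b) / 2
        a, b = (a, m) if g(a) * g(m) <= 0 else (m, b)
    return (a + b) / 2

def newton(x, iters=50):
    for _ in range(iters):
        x -= g(x) / dg(x)
    return x

def secant(x0, x1, iters=60):
    for _ in range(iters):
        denom = g(x1) - g(x0)
        if denom == 0.0:                 # already converged
            break
        x0, x1 = x1, x1 - g(x1) * (x1 - x0) / denom
    return x1

r_bis = bisect(0.0, 1.0)                 # g(0) < 0 < g(1)
r_new = newton(1.0)
r_sec = secant(0.0, 1.0)
```

The comparison the exercise asks for then falls out of the iteration counts: bisection gains one bit per step, while Newton and secant converge super-linearly once inside the basin.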

root finding

Let $\gamma\in \mathbb{R}$ and $A=\begin{pmatrix}1 & \gamma \\ 0 & 1\end{pmatrix}$.

I want to calculate the condition numbers $\text{cond}_1(A) , \text{cond}_2(A) ,\text{cond}_{\infty}(A) $.

The determinant of the matrix $A$ is equal to $\det (A)=1\neq 0$, so the matrix $A$ is invertible.

The inverse matrix is $A^{-1}=\frac{1}{\det (A)}\begin{pmatrix}1 & -\gamma \\ 0 & 1\end{pmatrix}=\begin{pmatrix}1 & -\gamma \\ 0 & 1\end{pmatrix}$.

- ...

Condition number - Relative error

I want to expand this function \(\displaystyle cx(x-l)\) into \(\displaystyle \sum_{n=-\infty}^{\infty}a_{n}e^{-\alpha x}\).

How can I do it? Do you have an idea?

$$

\left|\frac{df(t)}{dt}\right| \le 2AB\pi

$$

where $A=\sup|f(t)|$ and $B$ is the bandwidth of $f(t)$,

the question is:

Is there a relationship for discrete function x[n] like this?

$$

|x[n] - x[n-1]| \le \mu W

$$

where

$$

X[k] = \sum\limits_{n = 0}^{N - 1} {x[n]{e^{ - j\frac{{2\pi }}{N}nk}}}

$$

is the DFT of $x[n]$, and $X[k]=0$ for $k > W$

\[

\begin{align}

\frac{dr}{dt}= &\ - \left(\alpha+\frac{\epsilon}{2}\right)r + \left(1-\frac{\epsilon}{2}\right)\alpha p - \alpha^2\beta r p + \frac{\epsilon}{2} \\

\frac{dp}{dt}= &\ \left(1-\frac{\epsilon}{2\alpha}\right)r - \left(1+\frac{\epsilon}{2}\right)p - \alpha\beta r p + \frac{\epsilon}{2\alpha}

\end{align}

\]

with initial conditions $r(0)=1, p(0)=0$ and $\epsilon$ is a small parameter. After...

Regular perturbation nonlinear problem

\(\displaystyle (a^2-b^2)\int_{0}^{P} J_{v}(ax)J_{v}(bx)x\,dx=P\left\{bJ_{v}(aP)J^{'}_{v}(bP)-aJ^{'}_{v}(aP)J_{v}(bP)\right\}\)

where \(\displaystyle J^{'}_{v}(aP)=\left.\frac{dJ_{v}(ax)}{d(ax)}\right|_{x=P}\)

I don't have an idea.

using \(\displaystyle g(x,t)=g(u+v,t)=g(u,t)g(v,t)\)

to show that \(\displaystyle J_{0}(u+v)=J_{0}(u)J_{0}(v)+2\sum_{s=1}^{\infty}J_{s}(u)J_{-s}(v)\)

___________________________________________________________________________________________

my solution

\(\displaystyle g(u+v,t)=e^{\frac{u+v}{2}(t-\frac{1}{t})}\)

\(\displaystyle g(u+v,t)=e^{\frac{u}{2}(t-\frac{1}{t})}\cdot e^{\frac{v}{2}(t-\frac{1}{t})}\)...

Bessel function

and

\(\displaystyle \left| J_{0}(x) \right|\le 1 \) and \(\displaystyle \left| J_{n}(x) \right|\le \frac{1}{\sqrt{2}} \) for \(\displaystyle n\ge 1\)

how to show that

\(\displaystyle 1=(J_{0}(x))^{2}+2(J_{1}(x))^{2}+2(J_{2}(x))^{2}+\cdots\)

I don't have an idea.

I'd like to ask for help with how to accurately compute integrals of functions which have very intense oscillations. My example is to estimate

\(\displaystyle I = \int_0^{\infty} \sin(x^2) dx= \int_0^{\infty}\frac{\sin(t)}{2\sqrt{t}} dt\).

I tried the trapezoid rule over one oscillation at a time, but the result is poor. My next thought is to collect the positive parts and negative parts together and add them later on in the code...
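That sign-grouping idea can be pushed further: the lobe integrals form an alternating series with slowly decaying terms, and repeatedly averaging its partial sums (an Euler-transformation style acceleration) converges far faster than the raw sum. A sketch (my own code; the known value \(\sqrt{\pi/8}\) is only used as a check):

```python
import math

def lobe(k, m=2000):
    # trapezoid rule over one sign-constant lobe of sin(t) / (2*sqrt(t))
    a, b = k * math.pi, (k + 1) * math.pi
    h = (b - a) / m
    g = lambda t: 0.0 if t == 0.0 else math.sin(t) / (2.0 * math.sqrt(t))
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, m))
    return s * h

# partial sums oscillate around the limit; repeated pairwise averaging
# of the partial-sum sequence pins it down
total, parts = 0.0, []
for k in range(40):
    total += lobe(k)
    parts.append(total)
for _ in range(20):
    parts = [(parts[i] + parts[i + 1]) / 2 for i in range(len(parts) - 1)]
approx = parts[-1]
```

For heavy-duty use, dedicated oscillatory-quadrature methods (Filon-type rules, or contour rotation for Fresnel-type integrands) are the standard tools, but the accelerated lobe sum already does well here.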

Any comments?

Thank you!

I am trying to understand the multiple grid method.

The idea is to use at each step different grids to solve a linear system of differential equations, so that we get the best approximation as possible.

It holds that the finer the discretization, the more accurate the solution is, and the coarser the discretization, the faster the process converges.

The multiple grid method combines these two grids, to get an accurate solution with fast convergence.

Is the idea correct...

Multiple grid method

Hi, this is my first post with a problem that I have during my Msc Project.

I will briefly discuss my project and the reason why I would like to solve this problem, if you do not want to read this part you can skip it.

I am doing experimental research on a scaled laboratory setup where I measure the changes in the bottom profiles of a sediment layer under an oscillating fluid flow. Via a pattern matching technique I...

Geometry problem

We consider the initial value problem

$$x'(t)=-y(t), t \in [0,1] \\ y'(t)=x(t), t \in [0,1] \\ x(0)=1, y(0)=0$$

I want to solve approximately the above problem using the forward Euler method in uniform partition of 100 and 200 points.

I have written the following code in matlab:

Code:

```
N=100;
h=1/N;
y=[1;0];              % state vector [x; y] with x(0)=1, y(0)=0
A=[0 -1;1 0];
for i=1:N
    y=(eye(2)+h*A)*y;
end
y
```

I cannot solve this equation for T. It is the final sequence of a number of equations in a book about modelling athletic performance using bioenergetics. I had a high-school maths education 40 years ago and I'm stumped. I have tried the Mathway app to solve equations, but I think this requires a human touch... The author says it's easy to create a predicted finishing time: "just solve for T".

P=...

Stuck solving a complex equation for T

\(\displaystyle \begin{cases} \dot{q} = a \left( 1 - q^2 \right) \\ \dot{a} = - \alpha - a^2 q\end{cases} \qquad \alpha \in (0, 1 ) \)

I've looked into this ODE system for about 7 months now, but I haven't got anything promising for how to write down the solution. I'm mostly interested in the \(\displaystyle q\)-series. (To those of you who are now wondering why I'm doing something like this: I'm very interested in physics and projectile motion. This ODE system comes from quadratic air resistance when...

Looking for a series solution for a nonlinear ODE system

I'm having a problem, banal in form, but perhaps complex in practical application.

Let's suppose we have 300 functions \(\displaystyle y = f (x)\) of random trend and we want to group these functions into 10 subgroups (each group consisting of 30 functions).

However, this grouping must not be random, but such that the sum of the areas subtended by these functions is the minimum possible in each group (there should be a sort of balance between the 10 subgroups, preferably...

Find the best function grouping with an iterative method

Let $G$ be the iteration matrix of an iteration method. Is the only condition for the iteration method to converge that the spectral radius is less than $1$, $\rho (G)<1$, no matter what holds for the norms of $G$?

I mean, if it holds that $\|G\|_{\infty}=3$ and $\rho (G)=0.3<1$, or $\|G\|_{1}=1$ and $\rho (G)=0.1<1$, in both cases the iteration method converges, or not?

I also have another question. Let $A$ be a symmetric matrix. If $\|A\|=0.01$ then there...

Convergence of iteration method - Relation between norm and eigenvalue

Loosely speaking, the theorem enables us to say that the Lyapunov exponents exist almost everywhere in the phase space under some measure theoretic/ergodic requirements (?).

I'm curious about what happens when we study the Lyapunov exponents on a subspace of our phase space. I guess a...

Oseledets' ergodic theorem

(Not sure which forum to pick for this question. This looked like the best one. I apologize if it is not.)

I have a number of functions (say $m$ functions) with integer domain. All functions are increasing. (Increasing in the sense of non-decreasing, $f(n+1) \ge f(n)$.) I want an algorithm to find $n_i$ for $1 \le i \le m$ with $\sum_i^m n_i \le N$ that maximizes $\sum_i^m f_i(n_i)$.

$m$ is usually small (2 or 3) but can be larger as well, and $m = 2$ can be solved reasonably fast by...
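One general-purpose option (my own sketch, not tuned for large $N$) is a budget dynamic program over the functions: $\text{best}[i][b]$ is the maximum of $\sum_{j\le i} f_j(n_j)$ with exactly $b$ units spent, which costs $O(mN^2)$ time overall.

```python
def allocate(funcs, N):
    """Maximize sum_i f_i(n_i) over integers n_i >= 0 with sum n_i <= N,
    assuming every f_i is non-decreasing; O(m * N^2) dynamic program."""
    m = len(funcs)
    NEG = float("-inf")
    best = [[NEG] * (N + 1) for _ in range(m + 1)]
    choice = [[0] * (N + 1) for _ in range(m + 1)]
    best[0][0] = 0.0
    for i in range(1, m + 1):
        for b in range(N + 1):
            for n_i in range(b + 1):
                prev = best[i - 1][b - n_i]
                if prev == NEG:
                    continue
                val = prev + funcs[i - 1](n_i)
                if val > best[i][b]:
                    best[i][b] = val
                    choice[i][b] = n_i
    # since the f_i are non-decreasing, spending the whole budget is optimal
    b, alloc = N, []
    for i in range(m, 0, -1):
        alloc.append(choice[i][b])
        b -= choice[i][b]
    alloc.reverse()
    return best[m][N], alloc

value, alloc = allocate([lambda n: n, lambda n: 2 * n], 5)
```

If the functions are additionally concave (diminishing returns), a greedy scheme that repeatedly gives one unit to the function with the largest marginal gain is optimal and much faster.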

Algorithm: maximize sum of increasing functions

We want to construct a linear spline that at the points $x_0=-1$, $x_1=0$, $x_2=1$ has the values $b_0=0$, $b_1=1$ and $b_2=3$.

The spline should be from $S_{X,1}$ with $X=\{x_0, x_1, x_2\}$ on $[-1,1]$.

Do we want to find a function of the following form:

$$s(x)=\begin{cases}a_1x+b_1 & x\in [x_0, x_1] \\ a_2x+b_2 & x\in [x_1, x_2]\end{cases}$$

that satisfies the given values?
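Yes, and for these particular data the two pieces can be written down directly from the divided differences (slope $1$ on $[-1,0]$ and slope $2$ on $[0,1]$); a quick check of the interpolation conditions, with my own variable names:

```python
def s(x):
    # linear spline through (-1, 0), (0, 1), (1, 3)
    if x <= 0:
        return 1.0 * x + 1.0      # slope (1-0)/(0-(-1)) = 1 on [-1, 0]
    return 2.0 * x + 1.0          # slope (3-1)/(1-0) = 2 on [0, 1]
```

Both pieces give $s(0)=1$, so the spline is continuous at the interior knot by construction.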

B-splines belong to a specific category of splines, right? What are the...

Construct a linear spline

We have the function $f(x)=x^5-\frac{5}{16}$.

I have approximated the root of that function using three steps of Newton's method with initial value $x_0=\frac{1}{2}$ :

\begin{align*}x_1&=x_0-\frac{f(x_0)}{f'(x_0)}\approx \frac{7}{5} \\ x_2&=x_1-\frac{f(x_1)}{f'(x_1)} \approx 1.1362692628 \\ x_3&=x_2-\frac{f(x_2)}{f'(x_2)}\approx 0.9465088238\end{align*}

So, $x_3=0.9465$ is an approximation of the root of $f(x)=x^5-\frac{5}{16}$.

Now, I want to write the Newton's method for...
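For the fixed-point discussion it may help to have the bare iteration on hand; a sketch (my own code) that reproduces the three hand-computed steps above and then lets Newton settle at the exact root $(5/16)^{1/5}$:

```python
def newton(f, df, x0, steps):
    x = x0
    for _ in range(steps):
        x -= f(x) / df(x)
    return x

f  = lambda x: x ** 5 - 5 / 16
df = lambda x: 5 * x ** 4

x3 = newton(f, df, 0.5, 3)       # matches the three steps computed by hand
root = newton(f, df, 0.5, 60)    # continue until the iteration settles
```

Written as a fixed-point map, the Newton step is $g(x)=x-\frac{f(x)}{f'(x)}=\frac{4}{5}x+\frac{1}{16x^4}$, which is the function whose contraction property one would then examine.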

Fixed point iteration: g is a contraction mapping

I am looking at the Gauss quadrature to approximate integrals. I haven't really understood the meaning of the weighting function. Could you explain that to me?

In each case, the points that we need depend on which weighting function we have, and so on which polynomials we consider, right?

For example, if we have the weighting function $w(x)=1$ we use the Legendre polynomials, and so their roots are the points that we need.

If we have $w(x)=\frac{1}{\sqrt{1-x^2}}$ we use the...

Gauss Quadrature

Is there any requirement for these periodic points to be unique?

For example, what if every neighborhood contains a periodic point (that we know about) which is part of the same periodic orbit. Do we still say that orbits are dense? Or are they dense in a trivial sense.

Thanks!

First, I want to calculate the solution using the Gauss algorithm with complete pivoting, with accuracy $\text{eps}=5\cdot 10^{-5}$ and floating-point arithmetic with $4$ decimal places.

I have done the following:

The maximal element of the matrix is $a_{3,3}$. We...

System of equations - Relative error

Let $A = QR$, where $Q$ is an orthogonal ($m\times m$)-matrix and $R$ is an upper ($m\times n$)-triangular matrix of rank $n$ ($m>n$).

I want to show that $$\min_{x\in \mathbb{R}^n}\|Ax-y\|_2=\|(Q^Ty)_{n+1}^m\|_2, \ \forall y\in \mathbb{R}^m$$

It is $(a)_k^l=(a_k, \ldots , a_l)^T$ if $a=(a_1, \ldots , a_l)^T\in \mathbb{R}^l$.

I have done the following:

\begin{align*}\min_{x\in \mathbb{R}^n}\|Ax-y\|_2&=\min_{x\in \mathbb{R}^n}\|QRx-y\|_2=\min_{x\in...

Minimum of the 2-norm

We have \begin{equation*}A:=\begin{pmatrix}-5.7 & -61.1 & -32.9 \\ 0.8 & 11.9 & 7.1 \\ -1.1 & -11.8 & -7.2\end{pmatrix} \ \text{ and } \ z^{(0)}:=\begin{pmatrix}1\\ 1 \\ 1\end{pmatrix}\end{equation*}

I want to approximate the largest (in absolute value) eigenvalue of $A$ with the power method (4 steps, using the infinity norm).

I have done the following:

From the power method we have that $z^{(k+1)} = \frac{Az^{(k)}}{\|Az^{(k)}\|}$ and it holds that $\|Az^{(k)}\|\rightarrow...
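For reference, one common infinity-norm variant scales by the signed entry of largest modulus, so the scaling factors themselves converge to the dominant eigenvalue (my own sketch; the choice of normalization is an assumption, your course may use $\|Az^{(k)}\|_\infty$ directly):

```python
A = [[-5.7, -61.1, -32.9],
     [ 0.8,  11.9,   7.1],
     [-1.1, -11.8,  -7.2]]
z = [1.0, 1.0, 1.0]
lam = 0.0
for _ in range(4):
    Az = [sum(A[i][j] * z[j] for j in range(3)) for i in range(3)]
    lam = max(Az, key=abs)            # scaling factor: entry of largest modulus
    z = [c / lam for c in Az]         # infinity-norm normalization (signed)
```

After each step `z` has infinity norm $1$, and `lam` is the current eigenvalue estimate; four steps reproduce the hand calculation.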

Approximation of eigenvalue with power method

In block deflation it holds for a non-singular matrix $S$ that \begin{equation*}SAS^{-1}=\begin{pmatrix}C & D \\ O & B\end{pmatrix}\end{equation*} where $O$ is the zero matrix.

It holds that $\sigma (A)=\sigma(B)\cup \sigma (C)$, where $\sigma (M)$ is the set of all eigenvalues of a matrix $M$.

Let $v_1, \ldots , v_m$ be linearly independent vectors such that $Av_j\in \text{span}\{v_1, \ldots , v_m\}, j=1,\ldots , m$.

I want to use these vectors to construct a matrix $S$, with...

How can we construct the matrix S?

Let $S_{X,3}$ be the vector space of the cubic splines functions on $[-1, 1]$ with the points \begin{equation*}X=\left \{x_0=-1, \ x_1=-\frac{1}{2},\ x_2=0,\ x_3=\frac{1}{2}, \ x_4=1\right \}\end{equation*}

I want to check if the following functions are in $S_{X,3}$.

- $f_1(x):=|x|^3$

- $f_2(x)=\left (x-\frac{1}{3}\right )_+^3$

- $f_3(x)=-x+x^3+3x^5$

- $f_4(x)=\sum_{n=0}^3a_nx^n$, $a_n\in \mathbb{R}, n=0, \ldots , 3$

We have to check at...

Cubic splines

We have the function $f (x) = e^{\lambda x}$ on an interval $[a, b] , \ \lambda \in \mathbb{R}$.

I want to show that the remainder $R_n (x) = f (x)- p_n (x)$ of the Lagrange interpolation of $f (x)$ with $n+1$ points from $[a, b]$ converges uniformly to $0$ for $n \rightarrow \infty$.

For each $n$ the points can be chosen arbitrarily in $[a, b]$.

The remainder is defined as...

Remainder converges uniformly to 0

I realise that for these sorts of measures there is often some pathological or special case getting in the way. Is that what is happening here?

Thanks.

Let $A$ be a regular ($n\times n$)-matrix for which the Gauss algorithm is possible.

If we choose as the right side $b$ the unit vectors $$e^{(1)}=(1, 0, \ldots , 0)^T, \ldots , e^{(n)}=(0, \ldots , 0, 1 )^T$$ and calculate the corresponding solutions $x^{(1)}, \ldots , x^{(n)}$ then the inverse matrix is $A^{-1}=[x^{(1)}, \ldots , x^{(n)}]$.

We can calculate the inverse with $n^3+O(n^2)$ operations. (1 operation = 1 multiplication or division)

If we calculate the solutions...

Calculation of the inverse matrix - Number of operations

I want to prove the following relations of condition number:

- $\operatorname{cond}(\alpha A)=\operatorname{cond}(A)$. The matrix norm is submultiplicative.

- $\operatorname{cond}_2(U)=1$ if $U$ is an orthogonal matrix.

- $\operatorname{cond}_2(UA)=\operatorname{cond}_2(A)$, $U$ is orthogonal.

- $\operatorname{cond}_2(A)\leq \operatorname{cond}_F(A)\leq n\operatorname{cond}_{\infty}(A)$.

I have done the following :

- I have proven this...

Relations of condition number

I'm lost at the first line, where it is stated that a given collection of polygons need not be convex. How is this possible? I am trying to understand translation surfaces from the perspective of dynamical systems, specifically billiard systems. In this setting we can 'unfold' the trajectory of a point particle. But surely...

Translation surfaces

Perhaps here, I'll get an answer.

https://mathoverflow.net/questions/282048/a-lemma-on-convex-domain-which-is-a-lipschitz-domain


I am reading the following paper:

Whitney Estimates for Convex Domains with...

A paper on Approximation Theory.

Can anyone help with the solution of this problem? How can I determine the zero $x^*$?

Given the 3 ODEs of the SIR model

$$\frac{dS}{dt} = -\beta S I \\ \frac{dI}{dt} = \beta S I - \gamma I \\ \frac{dR}{dt} = \gamma I$$

I've been asked to produce Euler's method in Excel for approximate solutions, given some initial values for S(0) and I(0), values for the constants $\beta$ and $\gamma$, and increments of time $t$.

What I would like to know is: do I need to use matrix/linear algebra to produce a generic solution for the system of...
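No matrix algebra is needed: explicit Euler just steps each equation forward with the current values, which maps directly onto one spreadsheet row per time step. A minimal sketch (my own code; $\beta$, $\gamma$ and the initial values are made-up illustration numbers, and I use the usual SIR signs $dI/dt=\beta SI-\gamma I$):

```python
beta, gamma = 0.0005, 0.1        # illustrative constants, not from the problem
S, I, R = 1000.0, 1.0, 0.0       # illustrative initial values
h = 0.1                          # time increment
history = [(S, I, R)]
for _ in range(1000):            # integrate up to t = 100
    dS = -beta * S * I
    dI =  beta * S * I - gamma * I
    dR =  gamma * I
    S, I, R = S + h * dS, I + h * dI, R + h * dR
    history.append((S, I, R))
```

A useful spreadsheet check is that $S+I+R$ stays constant along the rows, since the three derivatives sum to zero.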

SIR Model

I'm trying to find a solution to the following that I can stick into either Excel or a VBA script. It has been 25 years since I looked at any serious maths and I'm stumped. I can find and digest $e^{-(n^2y)}$ but can't work out the $n^{-2}$ addition. Happy to be walked through the steps if it helps me get a better understanding; for those interested, it is part of a solution to determine the ability of gas to flow through...

Solution to Infinite series for E^(n^2x)

Suppose we have a transportation problem and we get a first solution by the North-West Corner method. Then we want to optimize the solution by the stepping stone algorithm.

We pick an empty cell and we want to see how the cost changes if we added one unit at this cell.

Then we pick the cell with the most negative result.

The total cost will always be improved after that step, right?

Suppose we have the following tableau at one step:

Transportation problem

\begin{align}

&\int_0^x dy\,y^2 \cos(y^2) C^2 \!\!\left(\!\frac{\sqrt{2}\,y}{\sqrt\pi}\!\right)

\end{align}

In reality only the first one is causing me trouble; however, I have pasted the entire expression as it might lead to some cancellations. Any help would be welcome. $C()$ is the Fresnel integral.

$$\Bigl[\frac {d^2}{dt^2}+ \omega_{0}^{2}\Bigl] G(t, t') = \delta(t-t')$$

Now I found out the Fourier transform of $G(t, t')$ to be

$$G(\omega)= \frac{1}{2\pi} \frac{1}{\omega_{0}^{2}-\omega^2}$$ which has poles at $\omega=\pm\omega_{0}$

Now, how can I identify the ways the poles can be shifted, like shifting $\omega_{0}\rightarrow \omega_{0}+ i\epsilon$?

My instructor said there are four ways to shift. Can...

pole shifting for Fourier transform

If \(\displaystyle f(t) = f(t+T)\) then we can find the Fourier transform of \(\displaystyle f(t)\) through a sequence of delta functions located at the harmonics of the fundamental frequency modulated by the Fourier Transform of the restricted function \(\displaystyle F_r(\omega).\)

\(\displaystyle f(t) \leftrightarrow F_r(\omega) \frac{2\pi}{T} \sum_{n=-\infty}^{\infty} \delta(\omega -...\)

Fourier Transform of Periodic Functions

The Hamiltonian is defined as

\begin{align*}

\mathcal{H} = \dot{q}_ip_i - \mathcal{L}

\end{align*}

(where the summation convention is implied), and solving for $\mathcal{L}$, we have

\begin{align*}

\mathcal{L} = \dot{q}_ip_i - \mathcal{H}

\end{align*}

Taking the partial derivative with respect to...

Derivation of Euler-Lagrange equations w/ Hamilton's equations

"The equations below come from applying the law of mass action to two reactions.

$$X'=aXY \\

Y'=bYZ+cZ \\

Z'=dYZ+eZ$$

Find the two reactions and determine how the coefficients

I have got the first reaction as $X+Y\overset{a}\rightarrow Y$ with $a$ being negative. For the second...

Law of Mass Action

I was doing some approximation to solve another problem, but got stuck when trying to figure out a suitable inverse functions for this:

\(\displaystyle a = \frac{\cos x}{3x^2 - \pi^2}\), where \(\displaystyle 0 \le x \le \pi\).

What I need is the two functions \(\displaystyle x(a)\) at least near \(\displaystyle a \approx -0.086 \pm 0.01\) but I'm not quite sure how to do it well.

Thus far I tried some sort of numerical way, meaning that I put \(\displaystyle a = \) something and made a table of the...

Finding Inverse Function

It's some kind of series expansion of Kepler's equation to obtain an equation that approximates E as a function of M and e.

So Kepler's equation is written in this form:

$x = e\sin(M + x)$ with $x = E - M$

x is expressed as power...

Eccentric anomaly as function of the eccentric and the mean anomaly

How can we make a mathematical model for tracking micronutrients in an anaerobic system?

I have an FEA application where I need to apply a total force acting vertically upward but distributed over half a cylinder (1st and 2nd quadrants). This half cylinder is represented by a set of nodes between [tex]0 \le \theta \le 180 [/tex]

The force will be maximum at 90 degrees and 0 at 0/180 degrees. I thought that the equation in cylindrical coordinates would work like

y=F*cos(theta)/n where n is the number of nodes.

The force remains constant into the page as it acts over...

Equation for distributing a force on a half cylinder

I am trying to understand numerical analysis in my free time, and today I studied how to solve \(\displaystyle y' = \frac{x^2}{1 + y\sin (y^2)}\) with initial value \(\displaystyle y(0) = 0\). I asked myself two simple questions: what is \(\displaystyle y(1.5)\), and what is \(\displaystyle y(2.5)\)?

To check the answers, I solved the ODE. In implicit form one can write \(\displaystyle 2x^3 - 3 = 6y - 3\cos (y^2)\). Now, when \(\displaystyle x = 1.5\), one can solve for \(\displaystyle y \approx 0.941265\) with no...
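For the numerical side, a minimal classical Runge-Kutta (RK4) sketch, with the step count chosen arbitrarily, that can be checked against the implicit solution:

```python
import math

def f(x, y):
    # right-hand side of y' = x^2 / (1 + y*sin(y^2))
    return x**2 / (1.0 + y * math.sin(y**2))

def rk4(f, x0, y0, x_end, n_steps):
    """Classical 4th-order Runge-Kutta."""
    h = (x_end - x0) / n_steps
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h / 2 * k1)
        k3 = f(x + h / 2, y + h / 2 * k2)
        k4 = f(x + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return y

y15 = rk4(f, 0.0, 0.0, 1.5, 1000)
# implicit solution 2x^3 - 3 = 6y - 3cos(y^2) gives y(1.5) ~ 0.941265
```

The same call with `x_end = 2.5` answers the second question, though near points where \(1 + y\sin(y^2)\) approaches zero the step size would need care.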

About solving ODE numerically]]>

In numerical fluid dynamics there are field methods like finite-volume, finite-element, etc., and particle methods like Smoothed Particle Hydrodynamics (SPH) and others. Both approaches have advantages and disadvantages depending on the application. These two in principle completely different approaches can be combined using a Voronoi diagram. The Voronoi cells, which behave like particles, fill out the complete regarded space and move according to the physical laws (conservation of...

combination of field and particle methods for fluid dynamics]]>

I'm currently working with

$$\left \{ \begin{array}{ll} 16x^4-40ax^3+(15a^2+24b)x^2-18abx+3b^2 = 0 \\ 4x^4+5a s \sqrt{v} x^2 - 8 s \sqrt{v} x^3 - b s \sqrt{v} x > 0 \end{array} \right.$$

If it would help $a<0$ and $s<0$.

Thanks!]]>

I did some searching and found: Calculating the Center of Thrust on Multirotors | Flite Test

Currently, there is no way to tell how much lift a given helicopter blade has.

What is known is:

- how heavy the ship is

- the location of the center of mass

- the...

Starting with center of thrust to find amount of force on each point?]]>

I am struggling to see how eqn 4.1.17 is determined using eqn 4.1.10 for problem 4.1.2. It is not clear to me what [tex]\frac{d\phi}{dt}[/tex] is.

I have inserted the equations I think might be useful into the one jpeg. Any ideas? Sorry I can't make the picture any clearer given the size limit.

Thanks]]>

Given the problem $$-u''(x)+q(x)u(x)=f(x), 0 \leq x \leq 1, \\ u'(0)=u(0), \ \ u(1)=0$$ where $f,q$ are continuous functions on $[0,1]$ with $q(x) \geq q_0>0, x \in [0,1]$. Let $U_j$ be the approximations of $u(x_j)$ at the points $x_j=jh, j=0, 1, \dots , N+1$, where $(N+1)h=1$, which gives the finite difference method $$-\frac{1}{h^2}\left (U_{j-1}-2U_j+U_{j+1}\right )+q(x_j)U_j=f(x_j), \ \ 1 \leq j \leq N \\ \frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right...
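For what it's worth, the boundary equation can be recovered by a Taylor expansion at $x_0=0$, using the ODE and the boundary condition to eliminate $u''(x_0)$ and $u'(x_0)$ (a sketch):

$$u(x_1)=u(x_0)+h u'(x_0)+\frac{h^2}{2}u''(x_0)+O(h^3) \ \Rightarrow \ \frac{u(x_1)-u(x_0)}{h}=u'(x_0)+\frac{h}{2}u''(x_0)+O(h^2).$$

Substituting $u'(x_0)=u(x_0)$ (from $u'(0)=u(0)$) and $u''(x_0)=q(x_0)u(x_0)-f(x_0)$ (from the differential equation), and then dropping the $O(h^2)$ term, gives exactly $$\frac{1}{h}(U_1-U_0)-U_0=\frac{1}{2}h\left (q(x_0)U_0-f(x_0)\right ).$$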

How do we get this equation?]]>

(K and g are constants; K = -0.02, g = 9.82)

I can write these two as two different differential equations:

$v_x'(t) =$...

Solving 2nd order differential equation numerically]]>

We consider the initial-boundary value problem

$$u_t(t,x)=a(t,x) u_{xx}(t,x)-c(t,x) u(t,x) \ \ \forall t \in [0,T_f], x \in [a,b] \\ u(0,x)=u_0(x) \ \ \forall x \in [a,b] \\ u(t,a)=0=u(t,b) \ \ \forall t \in [0,T_f]\\ a,c \in C^1, a(t,x)>0, c(t,x) \geq 0$$

$$\tau=\frac{T_f}{N_t}, t_n=n \tau, n=0,1, \dots, N_t$$

We consider the explicit Euler method.

The scheme is $u_i^{n+1}= \mu a_i^n u_{i+1}^n +(1- 2 \mu a_i^n - \tau c_i^n) u_i^n + \mu a_i^n u_{i-1}^n$

$u_i^n=u_0(x_i)$ for $n=0...
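A minimal sketch of the scheme for the simplified constant-coefficient case $a \equiv 1$, $c \equiv 0$ on $[0,1]$ (my own test setup, with $\mu = \tau/h^2 \leq 1/2$ respecting the usual explicit-Euler stability restriction):

```python
import math

# explicit Euler for u_t = u_xx, u(0,x) = sin(pi*x), u = 0 at both ends
Nx = 20                       # subintervals in x
h = 1.0 / Nx
tau = 0.4 * h**2              # mu = tau/h^2 = 0.4 <= 1/2 (stability)
mu = tau / h**2
n_steps = round(0.1 / tau)    # march to T_f = 0.1

u = [math.sin(math.pi * i * h) for i in range(Nx + 1)]
for _ in range(n_steps):
    u_new = u[:]              # boundary entries stay (essentially) 0
    for i in range(1, Nx):
        u_new[i] = mu * u[i + 1] + (1 - 2 * mu) * u[i] + mu * u[i - 1]
    u = u_new

# exact solution exp(-pi^2 t) sin(pi x) has peak exp(-pi^2 * 0.1) at t = 0.1
peak = max(u)
```

Taking $\mu > 1/2$ in the same sketch makes the numerical solution blow up, which is a quick way to see the role of the constant in the stability condition.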

How do we get the right constant?]]>

I want to find the expression(s) for the kinetic energy matrix elements for the Laguerre mesh, to be used in solving the 1-D Schrödinger equation, so that I can write a Fortran program to calculate the elements.

I will assume the potential V = 0 everywhere; then the Schrödinger equation I want to solve is...

Equations for Lagrange-Laguerre mesh]]>

I have written a code to approximate the solution of the heat equation. I want to consider uniform partitions in order to approximate the solution of the given boundary / initial value problem.

So we partition $[a,b]$ into $N_x$ subintervals of length $h=\frac{b-a}{N_x}$, where the points $x_i, i=1, \dots ,N_x+1$, are given by the formula $x_i=a+(i-1)h$, so that we have $a=x_1<x_2< \dots <x_{N_x}<x_{N_x+1}=b$; and, respectively, we partition $[0,T_f]$ into $N_t$ subintervals of...

How do we choose the number of subintervals?]]>

We are given the boundary / initial value problem for the heat equation:

$\left\{\begin{matrix} u_t(t,x)=u_{xx}(t,x), \ \ x \in [a,b], \ \ t \geq 0\\ u(0,x)=u_0(x), \ \ \forall x \in [a,b] \\ u(t,a)=u(t,b)=0, \ \ \forall t \geq 0 \end{matrix}\right.$

I have written a code to approximate the solution of the problem.

How do we calculate the order of accuracy of the backward Euler finite difference method?

I have found the error $$E^n=\max_{1 \leq i \leq N_x+1}|u^n_i-u(t_n...
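A common way to turn such errors into an observed order is to halve the discretization parameter and take $\log_2$ of the error ratio: $p \approx \log_2\left(E_h / E_{h/2}\right)$. A toy sketch of that recipe, using the forward difference for $f'$ as a stand-in for the heat-equation solver:

```python
import math

def error(h):
    """Error of the forward difference for f = sin at x = 1 (exact f' = cos 1)."""
    return abs((math.sin(1 + h) - math.sin(1)) / h - math.cos(1))

# observed order from halving the step: p ~ log2(E(h) / E(h/2))
p = math.log2(error(0.01) / error(0.005))   # ~ 1 for a first-order formula
```

For backward Euler one would do the same with $E^n$ computed at two resolutions, refining $\tau$ and $h$ consistently with the expected $O(\tau + h^2)$ behavior (e.g. $\tau \sim h^2$ to see second order in $h$).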

Order of accuracy]]>

I want to show that for the Crank-Nicolson method, the following holds:

$|T_i^{n+\frac{1}{2}}| \leq \frac{\tau^2}{12} M_{ttt}+\frac{h^2}{12}M_{xxxx}$,

where $M_{ttt}=||u_{ttt}||_{\infty}, M_{xxxx}=||u_{xxxx}||_{\infty}$

and $T_{i}^{n+\frac{1}{2}}=\frac{u(t_{n+1},x_i)-u(t_{n},x_i)}{\tau}-\frac{1}{2} \frac{u(t_{n+1},x_{i+1})-2u(t_{n+1},x_i)+u(t_{n+1},x_{i-1})}{h^2}-\frac{1}{2} \frac{u(t_n,x_{i+1})-2u(t_n,x_i)+u(t_n,x_{i-1})}{h^2}$

I found the following using Taylor...

Consistency error]]>

We consider uniform partitions of $[0,T_f]$ and $[a,b]$:

$h=\frac{b-a}{N_x+1}, \tau=\frac{T_f}{N_t}$

$x_i=a+ih, i=0,1, \dots, N_x+1$

$t_n=n \tau, n=0,1, \dots, N_t$

$u_t-u_{xx}=0 \\ u(t=0,x)=u_0(x) \\ u(t,a)=0 \ \ \forall t \\ u(t,b)=0 \ \ \forall t$

$u_t(t,x) \approx \frac{u(t,x)-u(t-\tau,x)}{\tau} \\ u_{xx}(t,x) \approx \frac{u(t,x-h)-2u(t,x)+u(t,x+h)}{h^2}$

$U_i^n \approx u(t_n, x_i)$

$U_i^{n+1}-U_i^n= \mu (U_i^{n+1}- 2...

How to get the desired upper bound]]>

I have to show that the following difference quotients are approximations of $f'''(x)$.

$$\frac{f(x+3h)-3f(x+2h)+3f(x+h)-f(x)}{h^3} \\ \frac{f(x+2h)-2f(x+h)+2f(x-h)-f(x-2h)}{2h^3}$$

Which approximation is more accurate? Justify your answer.

I found the Taylor expansion of $f(x+3h) , f(x+2h), f(x+h)$ and found that

$$\left| \frac{f(x+3h)-3f(x+2h)+3f(x+h)-f(x)}{h^3}-f'''(x) \right| \leq \frac{22}{4} h ||f^{(4)}||_{\infty}$$

Have we shown now that...
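One way to back up the comparison numerically (a sketch, with $f=\sin$ as an arbitrary test function): estimate each quotient's observed order by halving $h$ and taking $\log_2$ of the error ratio.

```python
import math

def q1(f, x, h):
    # one-sided quotient (f(x+3h) - 3f(x+2h) + 3f(x+h) - f(x)) / h^3
    return (f(x + 3 * h) - 3 * f(x + 2 * h) + 3 * f(x + h) - f(x)) / h**3

def q2(f, x, h):
    # centered quotient (f(x+2h) - 2f(x+h) + 2f(x-h) - f(x-2h)) / (2h^3)
    return (f(x + 2 * h) - 2 * f(x + h) + 2 * f(x - h) - f(x - 2 * h)) / (2 * h**3)

f = math.sin
fppp = lambda x: -math.cos(x)   # third derivative of sin
x = 1.0
e1 = lambda h: abs(q1(f, x, h) - fppp(x))
e2 = lambda h: abs(q2(f, x, h) - fppp(x))

p1 = math.log2(e1(0.1) / e1(0.05))   # ~ 1: one-sided quotient is first order
p2 = math.log2(e2(0.1) / e2(0.05))   # ~ 2: centered quotient is second order
```

This is consistent with the Taylor analysis: the centered quotient's odd-order error terms cancel by symmetry, leaving an $O(h^2)$ error, so it is the more accurate one.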

Difference quotients]]>

We consider the finite difference method for the approximation of

$\left\{\begin{matrix} -u''(x)+q(x)u(x)=f(x)\\ u'(a)=u'(b)=0 \end{matrix}\right.$

and let $K$ be the $(N+2) \times (N+2)$ matrix of the method. Let $v \in \mathbb{R}^{N+2}$, $v=(v_0, v_1, \dots, v_{N+1})^T$.

If $(Kv)_i \leq 0 \ \forall i=0,1, \dots, N-1$, then $\max_{1 \leq i \leq N} \{ v_i \} \leq \max \{ v_0, v_{N+1},0\} \ (\star)$

Proof:

We suppose that...

Finite difference method]]>

LMM is totally new to me, but after a few days of searching through some article databases (Scopus, IOPscience, etc.) I haven't found anything that makes enough sense to me. (I did get the idea that Laguerre meshes are...

Lagrange mesh method]]>

Consider the boundary value problem

$\left\{\begin{matrix} - \epsilon u''+u'=1, & x \in [0,1] \\ u(0)=u(1)=0 & \end{matrix}\right.$

where $\epsilon$ is a positive given constant.

I have to write down a finite difference method for its numerical solution.

How can we know whether it is better to use a non-uniform partition or a uniform partition?]]>

$$\begin{bmatrix} a_1 & a_2 & \dots & a_{16} \\ a_1 \times b_1 & a_2 \times b_2 & \dots & a_{16} \times b_{16} \end{bmatrix}_{6 \times 16} \cdot \begin{bmatrix} f_1 \\ f_2 \\ \vdots \\ f_{16} \end{bmatrix}_{16 \times 1} = \begin{bmatrix} 0 \\ 0 \\ \text{constant} \\ 0 \\ 0 \\ 0 \end{bmatrix}_{6 \times 1}$$

where $a_1, \dots, a_{16}$ and $b_1, \dots, b_{16}$ are already known 3-dimensional constant vectors, whereas $f_1, \dots, f_{16}$ are scalar values. An additional 16 equations are known, which are nonlinear functions given as

$g_1(f_1,f_2,x_1), g_2(f_1,f_2,x_1), \dots$

How to solve a set of linear and non-linear equations using the Gauss-Newton method?]]>

I want to solve numerically the following boundary value problem:

$\left\{\begin{matrix} -u''+qu=f, & x \in [a,b]\\ -u'(a)+d_1 u(a)=0 & \\ u'(b)+d_2 u(b)=0 & \end{matrix}\right.$

where $q(x) \geq 0 \ \forall x \in [a,b]$ and $d_1, d_2 \geq 0$.

We consider the uniform partition of $[a,b]$ with step $h=\frac{b-a}{N}$.

The approximations $U_i$ of $u(x_i), i=1, \dots, N+1$ have to be calculated with the following finite difference method:

$\frac{2}{h^2}(U_1-U_2)+2...

Finite difference method]]>

We are looking for a periodic function $u \in C^2(\mathbb{R})$ with period $(b-a)$

$$-u''+qu=f \text{ where } q,f \text{ periodic functions with period } (b-a) \\ u(a)=u(b) \\ u(x)=u(x+(b-a))$$

$x_i=a+ih \\ h=\frac{b-a}{N+1}$

$\mathbb{R}_{\text{per}}^{N+1}=\{ U=(u_i)_{i \in \mathbb{Z}}: u_i \in \mathbb{R} \text{ and } u_{i+N+1}=u_i, i \in \mathbb{Z}\}$

$-\frac{u_{i-1}-2u_i+u_{i+1}}{h^2}+q(x_i) u_i =f(x_i), i=0,1, \dots, N (\star)$

$u_{-1}=u_N...

Periodic problem]]>

We define $\delta_{h,-,2} f(x) := \left( \delta_{h,-}+\frac{h}{2} \delta_{h,-}^2 \right) f(x)=\frac{1}{2h} \left( f(x-2h)-4f(x-h)+3f(x)\right)$.

Let $f \in C^3[a,b]$. Then:

$$| \delta_{h,-,2} f(x)- f'(x)|\leq h^2 ||f'''||_{\infty}$$

I have tried the following:

$$f(x-h)=f(x)-h f'(x)+\frac{h^2}{2} f''(x)-\frac{h^3}{6} f'''(\xi_1), \xi_1 \in (x-h,x) \\ f(x-2h)=f(x)-2hf'(x)+2h^2 f''(x)-\frac{4}{3} h^3 f'''(\xi_2), \xi_2 \in (x-2h,x)$$

So $\delta_{h,-,2}...

How can we show the bound?]]>

With what order of accuracy does the divided difference $\frac{f(x+2h)-f(x)}{2h}$ approximate the derivative?

I have tried the following:

$$f(x+2h)=f(x)+ 2h f'(x)+ 2h^2 f''(\xi) , \xi \in (x,x+2h)$$

$$\frac{f(x+2h)-f(x)}{2h}=\frac{2h f'(x)+ 2h^2 f''(\xi)}{2h}=f'(x)+hf''(\xi)$$

$$\left|\frac{f(x+2h)-f(x)}{2h}-f'(x) \right|=h |f''(\xi)| \leq h \, ||f''||_{\infty}$$

Thus the order of accuracy is $1$.
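As a quick sanity check (a sketch with $f=\sin$ as an arbitrary test function): for a first-order formula, halving $h$ should roughly halve the error.

```python
import math

def d(f, x, h):
    # the divided difference (f(x+2h) - f(x)) / (2h)
    return (f(x + 2 * h) - f(x)) / (2 * h)

f, fp = math.sin, math.cos
x = 1.0
errs = [abs(d(f, x, h) - fp(x)) for h in (0.1, 0.05, 0.025)]
rates = [math.log2(errs[i] / errs[i + 1]) for i in range(2)]
# first order: each halving of h roughly halves the error, so rates ~ 1
```

(Note this quotient is just the forward difference with step $2h$, which explains the first-order behavior.)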

Am I right? ]]>