Here's how I'd formulate it. Let ##C## be a set of positive numbers. Define ##C_n = \{x\in C \mid x> 1/n\}##. Then ##(C_n)## is a monotone increasing sequence w.r.t. inclusion and ##C = \bigcup_n C_n##.
Note that for each ##n\in\mathbb N## we have
$$\sum _{x\in C} x \geqslant \sum _{x\in C_n} x$$ ...
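A sketch of how the estimate presumably continues (my own completion, assuming the goal is to show that a set of positive numbers with a finite sum is countable): every element of ##C_n## exceeds ##1/n##, so
$$\sum _{x\in C_n} x \geqslant \frac{|C_n|}{n}.$$
Hence, if ##\sum_{x\in C} x## is finite, every ##C_n## is finite, and ##C = \bigcup_n C_n## is a countable union of finite sets, hence countable.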
Gaussian elimination is a product of linear algebra. The elimination method is justified because it does not change the linear dependence/independence of the given system - in other words, the system remains equivalent to the initial system after every step in the elimination process. Gaussian...
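As a small illustration of why a single elimination step preserves equivalence (my own example, not from the original thread): subtracting a multiple of one equation from another is reversible, so both systems have exactly the same solutions.
$$\begin{cases} x + 2y = 5 \\ 2x + 3y = 8 \end{cases} \;\xrightarrow{\;R_2 \to R_2 - 2R_1\;}\; \begin{cases} x + 2y = 5 \\ -y = -2 \end{cases}$$
The step is undone by adding ##2R_1## back to the second row, so the two systems are equivalent; here both give ##y=2##, ##x=1##.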
Why is it a problem that eigenvalues are not real? A fundamental matrix of the system is a matrix-valued function ##\Phi## such that the columns of ##\Phi(t)## are linearly independent solutions of the given system for all ##t##. Simply write how ##\Phi(t)## is parametrised with respect to ##t##...
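For concreteness, a standard fact (not tied to the specific system in the thread): if ##A## is a real matrix and ##\lambda = \alpha + i\beta## is an eigenvalue with eigenvector ##v##, then ##e^{\lambda t}v## solves ##x' = Ax##, and by
$$e^{(\alpha+i\beta)t}v = e^{\alpha t}\bigl(\cos(\beta t) + i\sin(\beta t)\bigr)v$$
its real and imaginary parts are two real-valued solutions. So complex eigenvalues are no obstacle to writing down a real fundamental matrix.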
Shooting from the hip, I'd say choice is not necessary for this to occur. To prove choice, we would be handed an arbitrary family of nonempty sets and would have to produce a choice function. The condition (**) is formulated in only finite terms. I don't see how it provides an angle to tackle the infinite...
A positive second derivative implies ##f## is convex for ##x<a##. Take anything that is convex and decreasing for (some) ##x<a## as a counterexample to your claim of ##f'(x)>0##.
As for the initial claim, think again of convexity. The slope at ##x=a## must be as large as it gets as ##x\to a^-##...
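A concrete instance of such a counterexample (my own choice, consistent with the description above): ##f(x) = e^{-x}## has ##f''(x) = e^{-x} > 0## everywhere, so it is convex, yet ##f'(x) = -e^{-x} < 0## for every ##x##.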
As you say, an ##m\times n## matrix over ##\mathbb R##, say, represents a linear map ##\mathbb R^n \to\mathbb R^m##. Hence, a row would correspond to a linear map ##\mathbb R^n\to\mathbb R##. A coordinate projection, for instance.
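Explicitly (a standard identification, added for concreteness): the row ##(a_1,\dots,a_n)## corresponds to the map
$$(x_1,\dots,x_n) \mapsto a_1x_1 + \dots + a_nx_n,$$
and the row ##(0,\dots,0,1,0,\dots,0)## gives the projection onto the corresponding coordinate.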
Summation is a binary operation; hence it extends only to finite sums.
This ## \sum _{k=0}^\infty c_k x^k ## is not a polynomial.
The sum of two polynomials is a polynomial.
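One way to make the distinction concrete (my own illustration): a polynomial has only finitely many nonzero coefficients, so a series such as
$$\sum_{k=0}^{\infty} x^k = 1 + x + x^2 + \dots$$
is not a polynomial, even though each partial sum ##\sum_{k=0}^{N} x^k## is one.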
That is indeed how vector spaces are defined. There is a generating set (of arbitrary cardinality) and the space...
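Presumably the point the sentence is heading towards (stated here as a standard fact): the space generated by a set ##S## consists of all finite linear combinations,
$$\operatorname{span}(S) = \bigl\{\, \alpha_1 s_1 + \dots + \alpha_k s_k \;\bigm|\; k\in\mathbb N,\ \alpha_i \text{ scalars},\ s_i\in S \,\bigr\},$$
so an infinite sum need not automatically belong to it.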
It is not wrong to check the claim up to ##m## manually. Sometimes that's even helpful. The induction hypothesis then is that the claim is true
for some ##n\geqslant m## (weak induction);
for all ##k\leqslant n##, where ##n\geqslant m## (strong induction).
In both cases the task is to prove...
You can formulate it more precisely. Instead of saying "consider the ##n=m## case", we can say "assume that ## B^nx = \alpha ^nx ## holds for some ##n\geqslant 2##". Then for the case ##n+1## we have the equalities
$$B^{n+1}x = BB^nx = B\alpha ^nx = \alpha ^nBx = \alpha ^n\alpha x = \alpha ^{n+1}x.$$
Right now...
The determinant of an upper or lower triangular matrix is equal to the product of the elements on the leading diagonal.
An upper triangular matrix is a square matrix whose entries below the leading diagonal are zero.
The claim follows quickly provided you are familiar with the Laplace...
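A small worked case (my own example) showing how the Laplace expansion along the first column produces the product of the diagonal entries:
$$\det\begin{pmatrix} a & b & c \\ 0 & d & e \\ 0 & 0 & f \end{pmatrix} = a\det\begin{pmatrix} d & e \\ 0 & f \end{pmatrix} = a(df - 0) = adf.$$
Iterating the same expansion gives the claim for any size, by induction.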
It's a power series at the point ##a=9## whose radius of convergence ##R## is given by
$$\frac{1}{R} = \limsup _n \frac{1}{\sqrt[n]{n\cdot 9^n}} = \frac{1}{9}.$$
Hence, the interval of convergence contains ##(0,18)##. For ##x=18## we get ## \sum \frac{(-1)^n}{n} ##, which converges. For ##x=0## we get...
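Assuming the coefficients are ##\frac{(-1)^n}{n\,9^n}## (an assumption on my part, but consistent with the ##x=18## endpoint above), the other endpoint would give
$$\sum_n \frac{(-1)^n(0-9)^n}{n\,9^n} = \sum_n \frac{1}{n},$$
the harmonic series, which diverges; the interval of convergence would then be ##(0,18]##.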