Math Challenge - April 2021

In summary, the earlier conversation discussed Kepler's third law and showed that the period of a planet's orbit can be expressed as ##T(a) = \pi a^{3/2}/\gamma##, where ##a## is the length of the semi-major axis and ##\gamma## is a positive constant. The equation was also solved for all values of ##a##, and the solution was shown to be unique under the given constraint.
  • #36
probably wrong for #8 but I'll try anyway; by Gram-Schmidt we can come up with an orthogonal basis ##(v_1, v_2)## for ##K## by setting ##v_1 = 1## and ##v_2 = x - \langle x, 1 \rangle = x - 1/2##. then it's just
$$\pi^{\bot}(v) = \frac{\langle v, x - \frac{1}{2} \rangle}{|x - \frac{1}{2}|^2} \left(x - \frac{1}{2}\right) + \langle v , 1 \rangle$$then$$\begin{align*}
\int_0^1 \left(x-\frac{1}{2}\right) e^x \,dx &= \frac{3-e}{2} \\
\int_0^1 \left(x-\frac{1}{2}\right)^2 dx &= \frac{1}{12} \\
\int_0^1 e^x \,dx &= e-1
\end{align*}$$so you just get$$\pi^{\bot}(e^x) = 6(3-e)\left(x-\frac{1}{2}\right) + (e-1)$$lol idk if that's right, but it's 3am so cba to check atm :smile:
 
  • #37
etotheipi said:
probably wrong for #8 but I'll try anyway; by Gram-Schmidt we can come up with an orthogonal basis ##(v_1, v_2)## for ##K## by setting ##v_1 = 1## and ##v_2 = x - \langle x, 1 \rangle = x - 1/2##. then it's just
$$\pi^{\bot}(v) = \frac{\langle v, x - \frac{1}{2} \rangle}{|x - \frac{1}{2}|^2} \left(x - \frac{1}{2}\right) + \langle v , 1 \rangle$$then$$\begin{align*}
\int_0^1 \left(x-\frac{1}{2}\right) e^x \,dx &= \frac{3-e}{2} \\
\int_0^1 \left(x-\frac{1}{2}\right)^2 dx &= \frac{1}{12} \\
\int_0^1 e^x \,dx &= e-1
\end{align*}$$so you just get$$\pi^{\bot}(e^x) = 6(3-e)\left(x-\frac{1}{2}\right) + (e-1)$$lol idk if that's right, but it's 3am so cba to check atm :smile:
It is correct. And here are the results in cleaned-up form:
$$
\pi^\perp(v) = \langle v,1 \rangle 1 + 12 \;\langle v,x-\dfrac{1}{2}\rangle \left(x-\dfrac{1}{2}\right)
$$
$$
\pi^\perp(e^x)=6x(3-e) +4e-10
$$
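For anyone who wants to double-check these formulas numerically, here is a minimal sketch in Python (numpy only; the quadrature setup and helper names are our own choice, not part of the original computation):

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Gauss-Legendre quadrature on [0, 1] for the inner product <u, v> = int_0^1 u(x) v(x) dx
nodes, weights = leggauss(50)
x = 0.5 * (nodes + 1.0)   # map nodes from [-1, 1] to [0, 1]
w = 0.5 * weights

def inner(u, v):
    """Approximate the L^2([0, 1]) inner product by quadrature."""
    return float(np.sum(w * u(x) * v(x)))

one = lambda s: np.ones_like(s)
v2 = lambda s: s - 0.5            # the Gram-Schmidt vector x - 1/2
f = np.exp

# Projection of e^x onto span{1, x - 1/2} via the orthogonal-basis formula
coeff = inner(f, v2) / inner(v2, v2)   # should be 6(3 - e) ~ 1.690
const = inner(f, one)                  # should be e - 1    ~ 1.718

s = np.linspace(0.0, 1.0, 5)
closed_form = 6 * (3 - np.e) * (s - 0.5) + (np.e - 1)
print(np.max(np.abs((coeff * (s - 0.5) + const) - closed_form)))  # ~1e-15
```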
 
  • Like
Likes etotheipi
  • #38
Another way of doing 7b:

\begin{align*}
\| A x \|_2^2 &= x^T A^T A x
\end{align*}

Define ##M = A^T A##. Obviously, ##M^T = M##. As ##M## is real and symmetric, there exists a matrix ##R## whose columns are eigenvectors of ##M##, such that ##R^T R = \mathbb{1}## and

\begin{align*}
M = R D R^T , \quad
D=
\begin{pmatrix}
\lambda_1 & & & \\
& \lambda_2 & & 0 \\
0 & & \ddots & \\
& & & \lambda_d
\end{pmatrix}
\end{align*}

where ##\lambda_i## are the eigenvalues. We make the following change of variables with unit Jacobian:

\begin{align*}
x' = R^T x \quad (|\det R^T| = 1)
\end{align*}

in

\begin{align*}
\int \prod_{i=1}^d dx_i \exp ( -x^T M x) &= \int \prod_{i=1}^d dx_i' \exp ( -x^{'T} D x')
\\
&= (\pi)^{d/2} / (\prod_{i=1}^d \lambda_i)^{1/2}
\\
&= (\pi)^{d/2} / (\det M)^{1/2}
\\
&= (\pi)^{d/2} / \det A .
\end{align*}
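As a numerical cross-check of this chain in the case ##d=2##, here is a sketch in Python (scipy quadrature over a finite box, since the integrand decays rapidly; the example matrix is arbitrary, and note the absolute value on the determinant, a point which comes up just below):

```python
import numpy as np
from scipy.integrate import dblquad

# Arbitrary non-singular 2x2 example
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
M = A.T @ A  # real, symmetric, positive definite

# Integrand exp(-x^T M x); dblquad passes the inner variable first
integrand = lambda y, x: np.exp(-np.array([x, y]) @ M @ np.array([x, y]))

# The Gaussian is negligible outside [-10, 10]^2, so a finite box suffices
val, _ = dblquad(integrand, -10, 10, -10, 10)
print(val, np.pi / abs(np.linalg.det(A)))  # both ~0.5712
```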
 
  • Like
Likes graphking
  • #39
fresh_42 said:
6. Let ## f\in L^2 ( \mathbb{R} ) ## and ## g : \mathbb{R} \longrightarrow \overline{\mathbb{R}} ## be given as
$$
g(t):=t\int_\mathbb{R} \chi_{[t,\infty )}(|x|)\exp(-t^2(|x|+1))f(x)\,dx
$$
Show that ##g\in L^1(\mathbb{R}).##

Reference for this definition is Papa Rudin pg. 65:

Definition: If ##0<p<\infty## and if ##f## is a complex measurable function on the measure space ##X##, define

$$\| f \|_{p}:=\left\{\int_{X} |f|^p\, d\mu\right\}^{\tfrac{1}{p}}$$

and let ##L^p( \mu )## consist of all ##f## for which ##\| f\|_{p}<\infty##.

Work:
$$\begin{align} \| g\|_{1} & =\int_{\mathbb{R}}\left| t \int_{\mathbb{R}}\chi_{\left[ t,\infty \right)}(|x|) \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt \\ & =
\begin{cases}
\int_{\mathbb{R}}\left| t \left( \int_{-\infty}^{-t}+\int_{t}^{\infty}\right) \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt & \text{if } t \geq 0 \\
\int_{\mathbb{R}}\left| t \int_{-\infty}^{\infty} \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt & \text{if } t < 0
\end{cases} \\ & = 4\int_{0}^{\infty} t\left| \int_{\max\left\{ t,0 \right\} }^{\infty} \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt \\ & \leq 4\int_{0}^{\infty} t \int_{\max\left\{ t,0 \right\} }^{\infty} \exp (-t^2(|x|+1))\left| f(x)\right| \, dx \, dt \\ & \leq \underbrace{4\int_{0}^{\infty} t e^{-t^2}\, dt}_{=2}\cdot \int_{\max\left\{ t,0 \right\} }^{\infty} |f(x)|\, dx \\ & \leq 2\left\{\left[ \int_{\max\left\{ t,0 \right\} }^{\infty} |f(x)|^2\, dx\right]^{\tfrac{1}{2}}\right\} ^{2} < \infty \\ \end{align}$$

where the last inequality (finiteness) follows from the hypothesis that ##f\in L^{2}(\mathbb{R} )##, and this was to be shown.
 
  • Like
Likes Office_Shredder
  • #40
benorin said:
Reference for this definition is Papa Rudin pg. 65:

Definition: If ##0<p<\infty## and if ##f## is a complex measurable function on the measure space ##X##, define

$$\| f \|_{p}:=\left\{\int_{X} |f|^p\, d\mu\right\}^{\tfrac{1}{p}}$$

and let ##L^p( \mu )## consist of all ##f## for which ##\| f\|_{p}<\infty##.

Work:
$$\begin{align} \| g\|_{1} & =\int_{\mathbb{R}}\left| t \int_{\mathbb{R}}\chi_{\left[ t,\infty \right)}(|x|) \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt \\ & =
\begin{cases}
\int_{\mathbb{R}}\left| t \left( \int_{-\infty}^{-t}+\int_{t}^{\infty}\right) \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt & \text{if } t \geq 0 \\
\int_{\mathbb{R}}\left| t \int_{-\infty}^{\infty} \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt & \text{if } t < 0
\end{cases} \\ & = 4\int_{0}^{\infty} t\left| \int_{\max\left\{ t,0 \right\} }^{\infty} \exp (-t^2(|x|+1)) f(x)\, dx \right| \, dt \\ & \leq 4\int_{0}^{\infty} t \int_{\max\left\{ t,0 \right\} }^{\infty} \exp (-t^2(|x|+1))\left| f(x)\right| \, dx \, dt \\ & \leq \underbrace{4\int_{0}^{\infty} t e^{-t^2}\, dt}_{=2}\cdot \int_{\max\left\{ t,0 \right\} }^{\infty} |f(x)|\, dx \\ & \leq 2\left\{\left[ \int_{\max\left\{ t,0 \right\} }^{\infty} |f(x)|^2\, dx\right]^{\tfrac{1}{2}}\right\} ^{2} < \infty \\ \end{align}$$

where the last inequality (finiteness) follows from the hypothesis that ##f\in L^{2}(\mathbb{R} )##, and this was to be shown.
The last inequality is Hölder's.
 
  • #41
Don't the outermost square and square root cancel, and then it looks a lot to me like the second to last step is just asserting that
$$\int_{\max(t,0)}^\infty |f(x)| dx \leq \int_{\max(t,0)}^{\infty} |f(x)|^2 dx$$

I must be missing something because I don't think that's true.
 
  • #42
Office_Shredder said:
Don't the outermost square and square root cancel, and then it looks a lot to me like the second to last step is just asserting that
$$\int_{\max(t,0)}^\infty |f(x)| dx \leq \int_{\max(t,0)}^{\infty} |f(x)|^2 dx$$

I must be missing something because I don't think that's true.
If we do not separate the factors, then we have Hölder (##1=1/2 +1/2##)
$$
\int_\mathbb{R} |u(x)f(x)|\,dx =\|uf\|_1\leq \|u\|_2\|f\|_2 < \infty
$$
where ##u## is the exponential factor.
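A quick numerical illustration of this Hölder step (a sketch in Python; we freeze ##t=1## and pick a sample ##f\in L^2(\mathbb{R})\setminus L^1(\mathbb{R})## for concreteness):

```python
import numpy as np
from scipy.integrate import quad

t = 1.0
u = lambda x: np.exp(-t**2 * (abs(x) + 1.0))  # the exponential factor at fixed t
f = lambda x: 1.0 / (1.0 + abs(x))            # in L^2(R), but not in L^1(R)

# ||uf||_1 and the Hölder bound ||u||_2 * ||f||_2
uf_1 = quad(lambda x: abs(u(x) * f(x)), -np.inf, np.inf)[0]
u_2 = np.sqrt(quad(lambda x: u(x) ** 2, -np.inf, np.inf)[0])
f_2 = np.sqrt(quad(lambda x: f(x) ** 2, -np.inf, np.inf)[0])

print(uf_1, u_2 * f_2)  # ~0.439 <= ~0.520, as Hölder promises
```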
 
  • #43
julian said:
Another way of doing 7b:

\begin{align*}
\| A x \|_2^2 &= x^T A^T A x
\end{align*}

Define ##M = A^T A##. Obviously, ##M^T = M##. As ##M## is real and symmetric, there exists a matrix ##R## whose columns are eigenvectors of ##M##, such that ##R^T R = \mathbb{1}## and

\begin{align*}
M = R D R^T , \quad
D=
\begin{pmatrix}
\lambda_1 & & & \\
& \lambda_2 & & 0 \\
0 & & \ddots & \\
& & & \lambda_d
\end{pmatrix}
\end{align*}

where ##\lambda_i## are the eigenvalues. We make the following change of variables with unit Jacobian:

\begin{align*}
x' = R^T x \quad (|\det R^T| = 1)
\end{align*}

in

\begin{align*}
\int \prod_{i=1}^d dx_i \exp ( -x^T M x) &= \int \prod_{i=1}^d dx_i' \exp ( -x^{'T} D x')
\\
&= (\pi)^{d/2} / (\prod_{i=1}^d \lambda_i)^{1/2}
\\
&= (\pi)^{d/2} / (\det M)^{1/2}
\\
&= (\pi)^{d/2} / \det A .
\end{align*}
Here's my proof again. I should mention that in my proof I used Sylvester's criterion. In particular the theorem:

"A real-symmetric matrix ##M## has non-negative eigenvalues if and only if ##M## can be factored as ## M = A^TA##, and all eigenvalues are positive if and only if ##A## is non-singular".

Also, I think the answer is supposed to be ##\pi^{d/2} / |\det A|##.
 
Last edited:
  • #44
julian said:
I should mention that in my proof I used Sylvester's criterion. In particular the theorem:

"A real-symmetric matrix ##M## has non-negative eigenvalues if and only if ##M## can be factored as ## M = A^TA##, and all eigenvalues are positive if and only if ##A## is non-singular".

Also, I think the answer is supposed to be ##\pi^{d/2} / |\det A|##.
The shortest version is probably by the transformation theorem for integrals. ##\varphi (x)=Ax## is ##C^1## and ##D\varphi =A.##
 
  • #45
fresh_42 said:
If we do not separate the factors, then we have Hölder (##1=1/2 +1/2##)
$$
\int_\mathbb{R} |u(x)f(x)|\,dx =\|uf\|_1\leq \|u\|_2\|f\|_2 < \infty
$$
where ##u## is the exponential factor.
Look at you, giving manna from the prof: I had no idea this was so useful here... ;)

@Office_Shredder Yes I indeed reasoned like you had and must have assumed ##|f(x)|\geq 1## in my head lol
 
  • #46
I should have just said that in my proof of 7b I used: given that ##M = A^T A##, the eigenvalues of ##M## are non-negative because

\begin{align*}
\lambda = \frac{v^T M v}{v^T v} = \frac{v^T A^T A v}{v^T v} = \frac{\| A v \|_2^2}{\| v \|_2^2} \geq 0 .
\end{align*}

And the eigenvalues must be non-zero because ##M## is non-singular if ##A## is non-singular. That would have been more helpful to people. This is the reverse implication of the theorem:

"A real-symmetric matrix ##M## has non-negative eigenvalues if and only if ##M## can be factored as ## M = A^TA##, and all eigenvalues are positive if and only if ##A## is non-singular"

And, at the very end of my calculation (post #38) I should have written down ##\pi^{d/2} / |\det A|## because ##(\det M)^{1/2}## is a positive number.
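Both claims are easy to spot-check numerically; here is a sketch in Python (numpy; the example matrix is random, not from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # generic, hence (almost surely) non-singular
M = A.T @ A

eigenvalues = np.linalg.eigvalsh(M)  # eigvalsh is the symmetric-matrix routine
print(eigenvalues)                   # all strictly positive

# (det M)^(1/2) equals |det A|, not det A
print(np.sqrt(np.linalg.det(M)), abs(np.linalg.det(A)))
```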
 
  • #47
fresh_42 said:
The shortest version is probably by the transformation theorem for integrals. ##\varphi (x)=Ax## is ##C^1## and ##D\varphi =A.##

So you are taking a vector-valued function ##\varphi (x)## and considering its derivative. So in component form ##(D \varphi)_{ij} = \partial_j \varphi_i (x) = A_{ij}##. Is that right? Where do you go from there?
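Presumably the intended continuation is just the substitution ##y = \varphi(x) = Ax## in the transformation theorem, which would give
$$
\int_{\mathbb{R}^d} e^{-x^T A^T A x}\,dx = \int_{\mathbb{R}^d} e^{-\|y\|_2^2}\,\frac{dy}{|\det A|} = \frac{\pi^{d/2}}{|\det A|},
$$
the Jacobian contributing the factor ##|\det D\varphi |^{-1}=|\det A|^{-1}## and the remaining integral splitting into ##d## factors of ##\int_\mathbb{R} e^{-y_i^2}\,dy_i = \sqrt{\pi}##?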
 
Last edited:
  • #50
fresh_42 said:
13. Write
A doubt, possibly silly, about question 13. Are and required to be integers?
 
  • #51
Not anonymous said:
A doubt, possibly silly, about question 13. Are and required to be integers?
Well, it requires only induction and quadratic equations, so it is technically not difficult. But it is quite tricky, I admit. 12, 14, and 15 are probably easier. 11 is easy with the fundamental theorem of algebra and determinants, so it is technically difficult, but not very tricky.
 
  • #52
fresh_42 said:
Well, it requires only induction and quadratic equations, so it is technically not difficult. But it is quite tricky, I admit. 12, 14, and 15 are probably easier. 11 is easy with the fundamental theorem of algebra and determinants, so it is technically difficult, but not very tricky.
Sorry again, my question was incomplete due to a copy-paste quote error. I meant to ask whether ##a, b, c, d## in the solution for question 13 should be integers.

For question 11, are ##a(x)## and ##b(x)## also required to be polynomials, or just some real-valued functions? I am not familiar with the notation ##\mathbb{R}[x]##. Searching on the web, I see that one forum defines ##\mathbb{R}[x]## as polynomial functions, but I would like to know if that is a standard convention used in this math challenge too.

And thanks for those lightning quick replies!
 
  • #53
Not anonymous said:
Sorry again, my question was incomplete due to a copy-paste quote error. I meant to ask whether ##a, b, c, d## in the solution for question 13 should be integers.

Yes. The problem would otherwise be trivial.

Not anonymous said:
For question 11, are ##a(x)## and ##b(x)## also required to be polynomials, or just some real-valued functions? I am not familiar with the notation ##\mathbb{R}[x]##. Searching on the web, I see that one forum defines ##\mathbb{R}[x]## as polynomial functions, but I would like to know if that is a standard convention used in this math challenge too.

And thanks for those lightning quick replies!
Yes, ##\mathbb{R}[x]## stands for all polynomials with real coefficients.
##\mathbb{R}(x)## would be quotients of polynomials with real coefficients, and ##\mathbb{R}[[x]]## is the set of all formal power series with real coefficients, i.e. "polynomials of infinite degree".
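For concreteness, this is roughly how the three kinds of objects look in sympy (a sketch; the choice of library, and the truncation of the power series, are ours):

```python
from sympy import symbols, Poly, series, exp

x = symbols('x')

p = Poly(x**2 - 3*x + 1, x)        # an element of R[x]: a polynomial
q = (x**2 - 3*x + 1) / (x + 1)     # an element of R(x): a quotient of polynomials
s = series(exp(x), x, 0, 6)        # a truncation of e^x as an element of R[[x]]
print(p, q, s, sep='\n')
```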
 
  • #54
Let ##x = \sqrt[8] {2207 - \dfrac {1} {2207 - \dfrac {1} {2207 - \dfrac {1} {2207 - ...}}}}##. We note that ##x^8 = 2207 - \dfrac {1} {2207 - \dfrac {1} {2207 - \dfrac {1} {2207 - ...}}}## and this is equivalent to ##x^8 = 2207 - \dfrac {1} {x^8} \Rightarrow x^8 + \dfrac {1} {x^8} = 2207## (Equation 1).

Next, we note that ##\left(y + \dfrac {1} {y}\right)^2 = y^2 + \dfrac {1} {y^2} + 2##. Applying this identity to equation 1 and using the fact that ##x^8 = (x^4)^2##, we get ##\left(x^4 + \dfrac {1} {x^4}\right)^2 = 2209 \Rightarrow \left(x^4 + \dfrac {1} {x^4}\right) = \pm 47##. Applying the same identity repeatedly, while considering only non-negative values for the RHS (i.e. the non-negative square root) in order to get real-valued solutions, we get ##\left(x^2 + \dfrac {1} {x^2}\right) = \sqrt {47 + 2} = 7##, and therefore ##\left(x + \dfrac {1} {x}\right) = \sqrt {7 + 2} = 3##. This is equivalent to the quadratic equation ##x^2 - 3x + 1 = 0##, solving which we get ##x = \dfrac {3 \pm \sqrt {5}} {2}##. Since ##x = \sqrt[8]{2207 - \cdots} > 1##, we take the larger root. Hence one solution of the form ##\dfrac {a + b\sqrt{c}} {d}## to the given ##\sqrt[8] {2207 - ...}## expression is obtained with ##a = 3, b = 1, c = 5, d = 2##.
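A quick numerical sanity check of this value (sketch in Python):

```python
from math import sqrt

x = (3 + sqrt(5)) / 2

# x should satisfy x^8 + 1/x^8 = 2207 (Equation 1) ...
print(x**8 + x**-8)  # 2207.000...

# ... and should match the truncated tower under the 8th root
v = 2207.0
for _ in range(20):  # iterate v -> 2207 - 1/v
    v = 2207.0 - 1.0 / v
print(v ** 0.125, x)  # both ~2.6180339887
```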
 
  • Like
Likes fresh_42
  • #55
The positions of the first square in the 8 rows are ##1,2,...,8##. For any given placement of 8 rooks such that none of them threatens another, the following 2 conditions are necessary and sufficient: (a) no two rooks must be in the same row, (b) no two rooks must be in the same column. Since there must be exactly one rook per row, we may denote the positions of the 8 rooks by ##pos_{i}## where ##i## denotes the row and ##pos_{i}## is the number assigned to the square where the rook in that row is placed. Next we note that each position can be written as the sum of the number assigned to the first square in that row and the distance or offset of the rook from that first square, i.e. ##pos_{i} = firstpos_{i} + offset_{i}##. Here ##firstpos_{i}## would be ##1, 2, .., 8## respectively for the 8 rows in order and ##offset_{i} \in \{0, 1, .., 7\}##. Now, since no two rooks can occupy the same column, we cannot have ##offset_{i} = offset_{j}## for any ##i, j \in \{1, 2, ...,8\}, i \neq j##. Thus for any valid placement of 8 rooks, the offset values of the 8 rooks will be distinct, and since only 8 different possible offset values exist, they will cover the set ##\{0, 1, 2, ...,7\}##; hence they will always sum to ##0 + 1 + .. + 7 = 28##. Therefore, ##S = \sum_{i=1}^8 pos_{i} = \sum_{i=1}^8 (firstpos_{i} + offset_{i}) =##

##\sum_{i=1}^8 firstpos_{i} + \sum_{i=1}^8 offset_{i} = (1 + 2 + .. + 8) + (0 + 1 + .. + 7) = 36 + 28 = 64##.

Any valid placement of rooks will always sum to this same value, 64.
 
  • #56
Not anonymous said:
Any valid placement of rooks will always sum to this same value, 64.
If I place one rook on ##A1=1## and another on ##H8=64## I already get ##65## with only ##2## rooks.
 
  • Informative
Likes Not anonymous
  • #57
Think you just missed out a factor of 8 in your working, i.e. if ##K## is the set of the eight rook positions ##(i,j)## with ##i,j \in \{ 1,\dots, 8 \}## then you ought to have ##S = \sum_{(i,j) \in K} [i + 8(j-1)] = \sum_1^8 i + 8 \sum_1^7 j##
 
  • Informative
Likes Not anonymous
  • #58
fresh_42 said:
If I place one rook on ##A1=1## and another on ##H8=64## I already get ##65## with only ##2## rooks.
Right, my answer was wrong. I was suspicious when I got 64 as the answer, since it seemed too small to be correct, but I was half asleep and submitted the solution without cross-checking it on paper against the example placement I had in mind. Below is a corrected version, hopefully correct this time.

The positions of the first square in the 8 rows are ##1,9,17,...,57##. For any given placement of 8 rooks such that none of them threatens another, the following 2 conditions are necessary and sufficient: (a) no two rooks must be in the same row, (b) no two rooks must be in the same column. Since there must be exactly one rook per row, we may denote the positions of the 8 rooks by ##pos_{i}## where ##i## denotes the row and ##pos_{i}## is the number assigned to the square where the rook in that row is placed. Next we note that each position can be written as the sum of the number assigned to the first square in that row and the distance or offset of the rook from that first square, i.e. ##pos_{i} = firstpos_{i} + offset_{i}##. Here ##firstpos_{i}## would be ##\{1, 9, 17, 25, .., 57\}## (numbers spaced apart by a difference of 8) respectively for the 8 rows in order and ##offset_{i} \in \{0, 1, .., 7\}##. Now, since no two rooks can occupy the same column, we cannot have ##offset_{i} = offset_{j}## for any ##i, j \in \{1, 2, ...,8\}, i \neq j##. Thus for any valid placement of 8 rooks, the offset values of the 8 rooks will be distinct, and since only 8 different possible offset values exist, they will cover the set ##\{0, 1, 2, ...,7\}##; hence they will always sum to ##0 + 1 + .. + 7 = 28##. Therefore, ##S = \sum_{i=1}^8 pos_{i} = \sum_{i=1}^8 (firstpos_{i} + offset_{i}) =##

##\sum_{i=1}^8 firstpos_{i} + \sum_{i=1}^8 offset_{i} = (1 + 9 + 17 + .. + 57) + (0 + 1 + .. + 7) = 232 + 28 = 260##.

Any valid placement of rooks will always sum to this common value, 260.

Verified with the following two placements:
##\{A1, B2, C3, D4, E5, F6, G7, H8\} \rightarrow sum = (1 + 10 + 19 + 28 + 37 + 46 + 55 + 64) = 260##
##\{A1, B2, C3, D4, E5, F8, G6, H7\} \rightarrow sum = (1 + 10 + 19 + 28 + 37 + 48 + 54 + 63) = 260##
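This is also easy to confirm by brute force over all ##8! = 40320## non-attacking placements (sketch in Python):

```python
from itertools import permutations

# A non-attacking placement is a permutation: the rook in row r sits in column perm[r].
# Square numbering as above: number = 8*(row - 1) + column, with rows/columns in 1..8.
sums = {sum(8 * (row - 1) + col for row, col in enumerate(perm, start=1))
        for perm in permutations(range(1, 9))}
print(sums)  # {260}
```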
 
  • Like
Likes fresh_42
  • #59
etotheipi said:
Think you just missed out a factor of 8 in your working, i.e. if ##K## is the set of the eight rook positions ##(i,j)## with ##i,j \in \{ 1,\dots, 8 \}## then you ought to have ##S = \sum_{(i,j) \in K} [i + 8(j-1)] = \sum_1^8 i + 8 \sum_1^7 j##
Thanks! Yes, in the summation I had mistakenly used the values ##\{1, 2, .., 8\}## instead of ##\{1, 9, 17, ..., 57\}## as the start positions of the rows, and that is equivalent to missing out a factor of 8, as you mentioned.
 
  • Like
Likes etotheipi
  • #60
I thought I'd try again to drum up some interest in question #4. This subject is nice in that it mixes algebra and geometry and thus allows the two to enrich and illuminate each other. For example, this question is in the direction of showing the relation between factorization in algebra and decomposition in geometry.

If you are at all interested, you might also try the slightly less abstract but more difficult problem that if f and g are polynomials in C[X,Y], where C is the complex numbers, and f is irreducible, then V(g) contains V(f) if and only if f divides g (where V(f) is the set of zeroes of f in C^2, and V(g) is the set of zeroes of g). In particular, if for all points p in the plane, f(p) = 0 implies also g(p) = 0, and f is irreducible, then f divides g. And give an example to show the irreducibility of f is needed for this statement.

If f,g,...,h are distinct and irreducible, it follows that I(V(f^r.g^s...h^t)) = (f.g...h) = the principal ideal generated by the product of the distinct irreducible factors. Also V(f^r.g^s...h^t) = V(f) union V(g) ... union V(h), where each of V(f), V(g),...,V(h) is an irreducible variety.
 
Last edited:
  • #61
Here's my attempt at #4. I feel like there's a good chance that I am just getting confused by a definition though, or I'm assuming something is obvious that is not.

I'm going to use ##x## to denote ##x_1,...,x_n##.

Let's start with: if ##Y## is irreducible then ##I(Y)## is prime. If it's not prime, then there exist polynomials ##f, g## such that ##fg\in I(Y)## but neither ##f## nor ##g## is. This means ##fg=0## exactly on ##Y##. If ##f(x)g(x)=0## then either ##f(x)=0## or ##g(x)=0##. Let ##Y_1=V(f)## be the set of points where ##f=0##, and ##Y_2=V(g)## be the set of points where ##g=0##. Then ##Y=Y_1 \cup Y_2##. Since neither ##f## nor ##g## is in ##I(Y)##, neither ##Y_1## nor ##Y_2## equals ##Y##, a contradiction to it being irreducible.

Now suppose ##I(Y)## is prime, and suppose ##Y=Y_1\cup Y_2## for two smaller varieties. Then ##I(Y) \subset I(Y_1)## and ##I(Y) \subset I(Y_2)## are strict inclusions. Let ##f\in I(Y_1)\setminus I(Y)## and ##g\in I(Y_2)\setminus I(Y)##. Then for any ##x\in Y##, either ##x\in Y_1## and ##f(x)=0##, or ##x\in Y_2## and ##g(x)=0##. Hence ##(fg)(x)=0## for all ##x\in Y##, which means ##fg\in I(Y)##. But this is a contradiction to ##I(Y)## being prime, hence ##Y## must be irreducible.
 
  • #62
Office_Shredder said:
Then ##Y=Y_1 \cup Y_2##.

I don't think this holds. The union ##Y_1\cup Y_2## has codimension ##1## but ##Y## could be much smaller.
 
  • #63
Office_Shredder said:
Here's my attempt at #4. I feel like there's a good chance that I am just getting confused by a definition though, or I'm assuming something is obvious that is not.

I'm going to use ##x## to denote ##x_1,...,x_n##.

Let's start with: if ##Y## is irreducible then ##I(Y)## is prime. If it's not prime, then there exist polynomials ##f, g## such that ##fg\in I(Y)## but neither ##f## nor ##g## is. This means ##fg=0## exactly on ##Y##. If ##f(x)g(x)=0## then either ##f(x)=0## or ##g(x)=0##. Let ##Y_1=V(f)## be the set of points where ##f=0##, and ##Y_2=V(g)## be the set of points where ##g=0##. Then ##Y=Y_1 \cup Y_2##. Since neither ##f## nor ##g## is in ##I(Y)##, neither ##Y_1## nor ##Y_2## equals ##Y##, a contradiction to it being irreducible.

Now suppose ##I(Y)## is prime, and suppose ##Y=Y_1\cup Y_2## for two smaller varieties. Then ##I(Y) \subset I(Y_1)## and ##I(Y) \subset I(Y_2)## are strict inclusions. Let ##f\in I(Y_1)\setminus I(Y)## and ##g\in I(Y_2)\setminus I(Y)##. Then for any ##x\in Y##, either ##x\in Y_1## and ##f(x)=0##, or ##x\in Y_2## and ##g(x)=0##. Hence ##(fg)(x)=0## for all ##x\in Y##, which means ##fg\in I(Y)##. But this is a contradiction to ##I(Y)## being prime, hence ##Y## must be irreducible.
After a night of sleep and a second look ##^*)##, that is, a look at the rules
$$
V(I(Y))=Y \, , \,Y_1\subsetneq Y \Longleftrightarrow I(Y_1)\supsetneq I(Y)\, , \,\text{ etc.}
$$

Your proof is correct.

_________
##^*)## I have been misled by a lecture note I found on the internet that claimed that Hilbert's Nullstellensatz would be required. It is not. I should have taken my textbook in the first place! I write this as a reminder for all readers: reliability is a fragile good these days. In general we have (a.a.):
textbook > nLab > lecture note > Wikipedia > rest of the internet.
 
Last edited:
  • #64
Cool. I was going to post something about this last night, but decided to sleep on it and rethink what the definition of a variety actually meant.

I think showing that a finite union of varieties is another variety probably requires some nontrivial algebra.
 
  • #65
I think my objection is still valid. If ##Y## is a union of two lines in ##\mathbb{C}^3##, then ##Y## is certainly not a union ##V(f)\cup V(g).##

Office_Shredder said:
I think showing that a finite union of varieties is another variety probably requires some nontrivial algebra.

I think this is easy: if ##X=V(f_1,\ldots,f_n)## and ##Y=V(g_1,\ldots,g_m)##, then ##X\cup Y=V(f_1g_1,f_2g_1,\ldots,f_ng_m)##: every product ##f_ig_j## vanishes on ##X\cup Y##, and conversely, if ##z\notin X\cup Y##, then ##f_i(z)\neq 0## and ##g_j(z)\neq 0## for some ##i,j##, so ##f_ig_j(z)\neq 0.##
 
  • #66
Infrared said:
I think my objection is still valid. If ##Y## is a union of two lines in ##\mathbb{C}^3##, then ##Y## is certainly not a union ##V(f)\cup V(g).##
I think this is easy: if ##X=V(f_1,\ldots,f_n)## and ##Y=V(g_1,\ldots,g_m)##, then ##X\cup Y=V(f_1g_1,f_2g_1,\ldots,f_ng_m).##
Which direction do you mean? The proof as written is a bit messy. I think the correct line should be
$$
Y=(V(f)\cap Y)\cup (V(g)\cap Y).
$$
 
  • #67
To show the union of two varieties is a variety, the needed algebra seems to be that if I and J are ideals, and f is in I, and g is in J, then the product fg is in the intersection of the ideals I and J.
 
  • #68
I understand the problem now. In the first part, ##fg\in I(Y)## does not guarantee that ##fg=0## exactly on ##Y##; it could vanish on a larger set. So we need that ##Y_1\cap Y## is a variety in order to write ##Y=(Y_1\cap Y)\cup (Y_2\cap Y)##. And maybe intersecting varieties is the hard direction; I will think about it.
 
  • #69
Office_Shredder said:
I understand the problem now. In the first part, ##fg\in I(Y)## does not guarantee that ##fg=0## exactly on ##Y##; it could vanish on a larger set. So we need that ##Y_1\cap Y## is a variety in order to write ##Y=(Y_1\cap Y)\cup (Y_2\cap Y)##. And maybe intersecting varieties is the hard direction; I will think about it.
You also have a few too many indirect arguments, I think, which makes it hard to read. We only need:
1.) ##Y## irreducible and ##fg\in I(Y).##
2.) ##J=\sqrt{J}=:I(Y)## prime and ##V(J)=Y_1\cup Y_2.##
The second part is a bit more difficult than the first.
 
  • #70
The fact that a variety is irreducible if and only if its ideal is prime, i.e. problem #4, is "elementary"; i.e. although abstract and confusing, it follows directly from the definitions. The deeper result, requiring the Nullstellensatz, is the fact that every prime ideal arises as the ideal of some variety, i.e. that I(V(J)) = J, if J is prime. The "less abstract but more difficult" problem I posed in post #60 is a version of the Nullstellensatz.

Thus some authors may assert (correctly if imprecisely) that the Nullstellensatz is required to establish a (one-one) correspondence between irreducible varieties and prime ideals. Problem #4 only establishes an injection from the set of irreducible varieties to the set of prime ideals. I find all this stuff confusing myself, even after years of thinking about it.
 
  • Like
Likes fresh_42
