1-norm is larger than the Euclidean norm

In summary, the 1-norm is at least as large as the Euclidean norm. Define the Euclidean norm as the square root of the sum of the squares of a vector's components, and the 1-norm as the sum of the absolute values of the components. To prove the inequality, it suffices to show that the square of the Euclidean norm is at most the square of the 1-norm, i.e. that the sum of squares of finitely many numbers does not exceed the square of the sum of their absolute values. One route is induction on the number of components, but expanding the square directly gives the sum of squares plus nonnegative cross terms. Geometrically, in two variables the unit ball of the 1-norm is contained entirely inside the unit ball of the Euclidean norm.
  • #1
Dr. Seafood

Define, for each [itex]\vec{x} = (x_1, \ldots, x_n) \in \mathbb{R}^n[/itex], the (usual) Euclidean norm [itex]\Vert{\vec{x}}\Vert = \sqrt{\sum_{j = 1}^n x_j^2}[/itex] and the 1-norm [itex]\Vert{\vec{x}}\Vert_1 = {\sum_{j = 1}^n |x_j|}[/itex].

How can we show that, for all [itex]\vec{x} \in \mathbb{R}^n[/itex], we have [itex]\Vert{\vec{x}}\Vert \leq \Vert{\vec{x}}\Vert_1[/itex]?

I'm thinking of writing [itex]\Vert{\vec{x}}\Vert^2 \leq \Vert{\vec{x}}\Vert_1^2[/itex] and then showing (probably inductively) that the sum of squares of (finitely) many numbers is not larger than the square of the sum of the absolute values of the same numbers; i.e. show [itex]{\sum_{j = 1}^n x_j^2} \leq (\sum_{j = 1}^n |x_j|)^2[/itex] by induction on n. For n = 1 and 2 this is simple enough. The inductive step is tricky, but I feel like using an induction argument is totally overdoing it.

I ask this because I read that this is trivial, but I don't see it immediately. Do you?
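
(One way to write out the inductive step mentioned above, sketched here for reference rather than taken from the thread: assume [itex]\sum_{j = 1}^n x_j^2 \leq (\sum_{j = 1}^n |x_j|)^2[/itex] holds for some n. Then
[tex]\left( \sum_{j = 1}^{n+1} |x_j| \right)^2 = \left( \sum_{j = 1}^{n} |x_j| \right)^2 + 2\,|x_{n+1}| \sum_{j = 1}^{n} |x_j| + x_{n+1}^2 \;\geq\; \sum_{j = 1}^{n} x_j^2 + x_{n+1}^2 = \sum_{j = 1}^{n+1} x_j^2,[/tex]
since the middle cross term is nonnegative. The replies below show that the induction can be bypassed entirely.)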
 
  • #2


If you try writing out what
[tex]\left( \sum_{j = 1}^n |x_j| \right)^2 = \left( \sum_{i = 1}^n |x_i| \right) \cdot \left( \sum_{j = 1}^n |x_j| \right)[/tex]
really means, you will find that it is equal to
[tex]\sum_{j = 1}^n x_j^2 + A[/tex]
where you can quite easily show (from the explicit expression for A) that A ≥ 0.
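
Written out, the quantity A above (just the shorthand used in this reply for the cross terms) is
[tex]A = \sum_{i \neq j} |x_i|\,|x_j| = 2 \sum_{1 \leq i < j \leq n} |x_i|\,|x_j| \;\geq\; 0,[/tex]
so [itex]\Vert{\vec{x}}\Vert_1^2 = \Vert{\vec{x}}\Vert^2 + A \geq \Vert{\vec{x}}\Vert^2[/itex], and taking square roots gives [itex]\Vert{\vec{x}}\Vert \leq \Vert{\vec{x}}\Vert_1[/itex] with no induction needed.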
 
  • #3


Dr. Seafood said:
Define, for each [itex]\vec{x} = (x_1, \ldots, x_n) \in \mathbb{R}^n[/itex], the (usual) Euclidean norm [itex]\Vert{\vec{x}}\Vert = \sqrt{\sum_{j = 1}^n x_j^2}[/itex] and the 1-norm [itex]\Vert{\vec{x}}\Vert_1 = {\sum_{j = 1}^n |x_j|}[/itex].

How can we show that, for all [itex]\vec{x} \in \mathbb{R}^n[/itex], we have [itex]\Vert{\vec{x}}\Vert \leq \Vert{\vec{x}}\Vert_1[/itex]?

I'm thinking of writing [itex]\Vert{\vec{x}}\Vert^2 \leq \Vert{\vec{x}}\Vert_1^2[/itex] and then showing (probably inductively) that the sum of squares of (finitely) many numbers is not larger than the square of the sum of the absolute values of the same numbers; i.e. show [itex]{\sum_{j = 1}^n x_j^2} \leq (\sum_{j = 1}^n |x_j|)^2[/itex] by induction on n. For n = 1 and 2 this is simple enough. The inductive step is tricky, but I feel like using an induction argument is totally overdoing it.

I ask this because I read that this is trivial, but I don't see it immediately. Do you?

Work out the case of two variables first. There you can see that the unit ball in the 1-norm is contained completely inside the unit ball for the 2-norm. Algebraically, if |x| + |y| = 1 then

x^2 + y^2 + 2|x||y| = 1, so x^2 + y^2 ≤ 1; that is, the 2-norm is at most 1 whenever the 1-norm equals 1.
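
(In any number of variables the same expansion shows that [itex]\Vert{\vec{x}}\Vert \leq 1[/itex] whenever [itex]\Vert{\vec{x}}\Vert_1 = 1[/itex]. The step from the unit sphere to arbitrary vectors is by scaling, filled in here for completeness: for [itex]\vec{x} \neq \vec{0}[/itex], the vector [itex]\vec{x}/\Vert{\vec{x}}\Vert_1[/itex] has 1-norm equal to 1, so
[tex]\left\Vert \frac{\vec{x}}{\Vert{\vec{x}}\Vert_1} \right\Vert \leq 1, \quad\text{hence}\quad \Vert{\vec{x}}\Vert \leq \Vert{\vec{x}}\Vert_1,[/tex]
and the case [itex]\vec{x} = \vec{0}[/itex] is immediate.)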
 

Related to 1-norm is larger than the Euclidean norm

1. What are the 1-norm and the Euclidean norm?

The 1-norm and the Euclidean norm are two different ways of measuring the size or magnitude of a vector in mathematics. The 1-norm is also known as the Manhattan distance or taxicab norm, while the Euclidean norm is also known as the straight-line distance or L2 norm.
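
In symbols, for a vector [itex]\vec{x} = (x_1, \ldots, x_n)[/itex] (matching the definitions used in the opening post):
[tex]\Vert{\vec{x}}\Vert_1 = \sum_{j = 1}^n |x_j|, \qquad \Vert{\vec{x}}\Vert = \sqrt{\sum_{j = 1}^n x_j^2}.[/tex]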

2. Why is the 1-norm larger than the Euclidean norm?

The 1-norm is at least as large as the Euclidean norm because squaring the sum of the absolute values of the coordinates gives back the sum of their squares plus nonnegative cross terms, so the square of the 1-norm can never be smaller than the square of the Euclidean norm. Intuitively, the taxicab route between two points, which moves along the coordinate directions, is never shorter than the straight-line route.
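
A compact way to state this, with the equality case (not addressed above) made explicit:
[tex]\Vert{\vec{x}}\Vert_1^2 - \Vert{\vec{x}}\Vert^2 = 2 \sum_{1 \leq i < j \leq n} |x_i|\,|x_j| \;\geq\; 0,[/tex]
with equality exactly when at most one coordinate of [itex]\vec{x}[/itex] is nonzero.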

3. What are the applications of the 1-norm and the Euclidean norm?

1-norm and Euclidean norm have various applications in mathematics, computer science, and engineering. They are commonly used in data analysis, machine learning, optimization problems, and signal processing to name a few.

4. How do you calculate the 1-norm and the Euclidean norm?

To calculate the 1-norm, you take the sum of the absolute values of all the elements in a vector. For example, if the vector is [2, -3, 5], the 1-norm would be |2| + |-3| + |5| = 10.

To calculate the Euclidean norm, you take the square root of the sum of the squared values of all the elements in a vector. Using the same example as above, the Euclidean norm would be √(2² + (-3)² + 5²) = √38 ≈ 6.164.
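
A quick numerical check of the example above, as a minimal Python sketch (the function names and the vector are just illustrative):

[code]
import math

def one_norm(v):
    # 1-norm: sum of the absolute values of the components
    return sum(abs(x) for x in v)

def euclidean_norm(v):
    # Euclidean norm: square root of the sum of squared components
    return math.sqrt(sum(x * x for x in v))

v = [2, -3, 5]
print(one_norm(v))        # 10
print(euclidean_norm(v))  # ~6.164 (sqrt(38))
assert euclidean_norm(v) <= one_norm(v)
[/code]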

5. Which norm is better to use in a specific situation?

The choice between the 1-norm and the Euclidean norm depends on the specific situation and the problem at hand. The 1-norm is useful when dealing with sparse data, or when you want each coordinate's deviation to count linearly rather than quadratically. The Euclidean norm is the natural choice for dense data and when you want the ordinary geometric (straight-line) magnitude of a vector. Understanding the problem and the data determines which norm is more appropriate.
