I have no idea what 'Normalize'/'Normalization' means Help please?

In summary: the thread discusses the difficulty of a quantum computing course for someone with no prior coursework in linear algebra or probability. The concept of normalizing a quantum state is explained, and the role of normalization in probability is emphasized. The mathematical process for determining the normalization factor is worked through, and it is clarified that every quantum state is assumed to be normalized. The thread closes with a discussion of the course and the textbook being used.
  • #1
jumi
Just started a Quantum Computing course, and I'm getting continuously confused and lost in the class. I don't think the class is "out of my league" as I've been able to ace every physics class I've taken so far, so I don't want to drop the class.

That said, I'm getting confused at the most basic concepts in the course. I have no idea what normalizing a quantum state means or how to normalize something. It seems like the professor is just pulling variables out of thin air.

For example, say we have 2 qubits like this:
[itex]\left|ψ\right\rangle[/itex] = [itex]\alpha_{00}[/itex][itex]\left|00\right\rangle[/itex] + [itex]\alpha_{01}[/itex][itex]\left|01\right\rangle[/itex] + [itex]\alpha_{10}[/itex][itex]\left|10\right\rangle[/itex] + [itex]\alpha_{11}[/itex][itex]\left|11\right\rangle[/itex]

How do we determine the probability that the 1st qubit is 0? If we DO measure the 1st qubit to be zero, how then do we determine the probability that the 2nd qubit is also zero?

(BTW, this isn't homework. These are concepts I'm desperately trying to understand so I can TRY to do my homework...)

For what it's worth, I've never taken an actual Linear Algebra or Probability course. I've only ever been taught what I needed for whatever class needed it.

Thanks in advance.
 
  • #2
To answer your main question -
Normalization, in this case, means that the probabilities of the individual basis states must sum to 1:
|α00|^2 + |α01|^2 + |α10|^2 + |α11|^2 = 1
(modulus squared, since the amplitudes may be complex).
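
For concreteness, this condition can be checked numerically. A minimal sketch assuming NumPy, with made-up amplitude values:

```python
import numpy as np

# Hypothetical amplitudes for the basis states |00>, |01>, |10>, |11>.
# Amplitudes may be complex, so we use |alpha|^2, not a plain square.
alphas = np.array([0.5, 0.5j, -0.5, 0.5])

# Normalization condition: the probabilities |alpha|^2 must sum to 1.
total = np.sum(np.abs(alphas) ** 2)
print(total)  # 1.0 -> this state is normalized
```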
 
  • #3
salzrah said:
To answer your main question -
Normalization, in this case, means that the probabilities of the individual basis states must sum to 1:
|α00|^2 + |α01|^2 + |α10|^2 + |α11|^2 = 1
(modulus squared, since the amplitudes may be complex).

Thanks for the reply.

What about normalizing after measurement?

For example, using the same 2-qubit system as before, how would we generate |ψ,after>?
 
  • #4
Regarding this question: normalizing shouldn't be thought of as an action you perform, but rather as a check that all your probabilities add to one.

After you measure something, you collapse the system out of its linear superposition, so the system will be in just one state afterwards. Because of this, you know that the other states are impossible, since you just measured the system to be in a particular state. So when you ask what |ψ,after> is, I guess you can write
|ψ,after> = 1·|00⟩ if 00 is the state you measure, and the other probability amplitudes become 0. That's my two cents.
 
  • #5
salzrah said:
Regarding this question: normalizing shouldn't be thought of as an action you perform, but rather as a check that all your probabilities add to one.

After you measure something, you collapse the system out of its linear superposition, so the system will be in just one state afterwards. Because of this, you know that the other states are impossible, since you just measured the system to be in a particular state. So when you ask what |ψ,after> is, I guess you can write
|ψ,after> = 1·|00⟩ if 00 is the state you measure, and the other probability amplitudes become 0. That's my two cents.

Ok, I understand that.

However, in class, we did something like this:

Say we have the same 2-qubit state we've been talking about, [itex]\left|ψ\right\rangle[/itex] = [itex]\alpha_{00}[/itex][itex]\left|00\right\rangle[/itex] + [itex]\alpha_{01}[/itex][itex]\left|01\right\rangle[/itex] + [itex]\alpha_{10}[/itex][itex]\left|10\right\rangle[/itex] + [itex]\alpha_{11}[/itex][itex]\left|11\right\rangle[/itex].

If we measure the first qubit to be zero, then the state becomes [itex]\left|ψ\right\rangle = \frac{\alpha_{00} \left|00\right\rangle + \alpha_{01} \left|01\right\rangle}{\sqrt{\left|\alpha_{00}\right|^{2} + \left|\alpha_{01}\right|^{2}}}[/itex], where the denominator is the normalization factor.

How does one mathematically determine the normalization factor for any state?
 
  • #6
Well intuitively you can see that the denominator is equal to the square root of the probability of having the first qubit be zero.
As for why that is the normalization factor, I do not know.
 
  • #7
salzrah said:
Well intuitively you can see that the denominator is equal to the square root of the probability of having the first qubit be zero.
As for why that is the normalization factor, I do not know.

Ok, thanks. Yeah, I can see and reproduce the pattern, but I was wondering "why". No worries, though; I appreciate your help.
 
  • #8
Have you taken a probability course before? This is actually quite similar to Bayes' theorem, except that in QM the probability is the squared modulus of a probability amplitude.

In QM, [itex]\langle\Psi|\Psi\rangle[/itex] must equal 1, because that inner product gives the total probability of finding the system in some state. In any situation where you calculate probabilities before and after gaining more evidence, you have to normalize the set of probabilities so that the sum equals 1. It would be silly if there were more than a 100% chance that a measurement would yield one of those states, and just as illogical if there were less than a 100% chance that some value would be found. That's not physics; it is just part of the definition of probability.

A measurement of the first qubit that shows it to be in the state [itex]|0\rangle[/itex] means that only the two state kets you wrote are options. So the system must be in a superposition of those two base kets, and one would assume that [itex]\frac{a_{00}}{a_{01}}[/itex] before equals [itex]\frac{a_{00}}{a_{01}}[/itex] after (in reality this isn't exactly true because it is difficult to make a perfect measurement, but if we assume we have the best tools nature allows, it holds). That, together with the fact that the probabilities sum to 1, means the normalization factor must be [itex]\frac{1}{\sqrt{|a_{00}|^2+|a_{01}|^2}}[/itex].

Every time you see a quantum state written out, it is assumed that there is a normalization factor. But the initial state is always just assumed to be normalized because you can just absorb the factor into the [itex]a_n[/itex]. You only need to explicitly write one when the coefficients have already been defined.
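
The factor above can be checked numerically. A minimal sketch assuming NumPy, with made-up amplitude values:

```python
import numpy as np

# Made-up normalized amplitudes for |00>, |01>, |10>, |11>.
alphas = np.array([0.1 + 0.3j, 0.4, 0.5j, np.sqrt(1 - 0.51)])
assert np.isclose(np.sum(np.abs(alphas) ** 2), 1.0)  # starts normalized

# Measuring the first qubit as 0 keeps only the |00> and |01> amplitudes.
kept = alphas[:2]

# Normalization factor: 1 / sqrt(|a00|^2 + |a01|^2)
norm = np.sqrt(np.sum(np.abs(kept) ** 2))
psi_after = kept / norm

print(np.sum(np.abs(psi_after) ** 2))  # ≈ 1.0 -> renormalized
```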
 
  • #9
ps. are you a physics major or a compsci major? Have you taken a QM course before? The class may not be out of the league of what you can understand, but if you don't have much experience with QM it will be rough. What text are you using?
 
  • #10
I'm actually an Engineering Physics major. Never taken a QM or probability course, either. Currently a junior. We're using "Quantum Computation and Quantum Information" by Nielsen and Chuang.

I understood most of your first post. However, it seems like you took a leap to get to the normalization factor. I feel dumb, but I still don't understand how you got it...
 
  • #11
The probability that the state will be found in [itex]|00\rangle[/itex] is [itex]|\langle00|\Psi\rangle|^2=|\alpha_{00}|^2[/itex]. That is a postulate of QM. After the measurement of the first qubit you have a new state that is a superposition of [itex]|00\rangle[/itex] and [itex]|01\rangle[/itex]. If you measure the second qubit (and let's assume you do it quickly enough that the state has not evolved), it will be one of those, with weights [itex]|\alpha_{00}|^2[/itex] and [itex]|\alpha_{01}|^2[/itex] respectively. Those two values cannot sum to one, since before the first measurement they were only 2 of the 4 options (unless the other probabilities were zero, but then this is a silly example, so let's assume each base ket had a non-zero probability). However, for this second measurement, you will definitely find the second qubit in one of the two states. There is a 100% chance that it will be 0 or 1; equivalently, a probability of 1. The factor, [itex]A[/itex], that makes [itex]A^2\langle\Psi_2|\Psi_2\rangle=[/itex][itex]A^2\left(|\alpha_{00}|^2+|\alpha_{01}|^2\right)=1[/itex] is [itex]A=\frac{1}{\sqrt{|\alpha_{00}|^2+|\alpha_{01}|^2}}[/itex].

The thing that you need to realize is that the alphas (tex is giving me trouble) don't mean anything on their own. They are just the ratios for the kets. I'm going to bed, but I'll go into more detail if you have specific questions tomorrow.
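
The two-step calculation above can be sketched numerically; a minimal example assuming NumPy, with made-up amplitudes:

```python
import numpy as np

# Made-up normalized amplitudes for |00>, |01>, |10>, |11>.
a00, a01, a10, a11 = 0.6, 0.2j, -0.3, np.sqrt(1 - 0.49)
assert np.isclose(abs(a00)**2 + abs(a01)**2 + abs(a10)**2 + abs(a11)**2, 1.0)

# Probability that the first qubit is measured as 0:
p_first_0 = abs(a00) ** 2 + abs(a01) ** 2   # 0.36 + 0.04 = 0.40

# Given that outcome, conditional probabilities for the second qubit
# come from renormalizing the surviving amplitudes:
p_second_0 = abs(a00) ** 2 / p_first_0      # 0.90
p_second_1 = abs(a01) ** 2 / p_first_0      # 0.10
print(p_second_0 + p_second_1)              # ≈ 1.0, as required
```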

Also, if this were your third QM class and you had been studying linear algebra and probability for years, then you might have reason to feel dumb. As it is, I would just be sure to read up on these things. This is the website of one of my professors:
http://www.quantum.umb.edu/Jacobs/books.html
The chapters in the measurement book have not been edited, so the reading might be dense at points, but I think it covers everything you need to know about basic prob. and QM measurement. You might also look into the book by Raymond LaFlamme and others. If you happen to find a cheap copy, it gives better background for a beginner. Also, I think that Nielsen covers QM in more depth later in the book. I haven't read it, but I've skimmed it a few times.
 
  • #12
DrewD said:
The probability that the state will be found in [itex]|00\rangle[/itex] is [itex]|\langle00|\Psi\rangle|^2=|\alpha_{00}|^2[/itex]. That is a postulate of QM. After the measurement of the first qubit you have a new state that is a superposition of [itex]|00\rangle[/itex] and [itex]|01\rangle[/itex]. If you measure the second qubit (and let's assume you do it quickly enough that the state has not evolved), it will be one of those, with weights [itex]|\alpha_{00}|^2[/itex] and [itex]|\alpha_{01}|^2[/itex] respectively. Those two values cannot sum to one, since before the first measurement they were only 2 of the 4 options (unless the other probabilities were zero, but then this is a silly example, so let's assume each base ket had a non-zero probability). However, for this second measurement, you will definitely find the second qubit in one of the two states. There is a 100% chance that it will be 0 or 1; equivalently, a probability of 1. The factor, [itex]A[/itex], that makes [itex]A^2\langle\Psi_2|\Psi_2\rangle=[/itex][itex]A^2\left(|\alpha_{00}|^2+|\alpha_{01}|^2\right)=1[/itex] is [itex]A=\frac{1}{\sqrt{|\alpha_{00}|^2+|\alpha_{01}|^2}}[/itex].

The thing that you need to realize is that the alphas (tex is giving me trouble) don't mean anything on their own. They are just the ratios for the kets. I'm going to bed, but I'll go into more detail if you have specific questions tomorrow.

Also, if this were your third QM class and you had been studying linear algebra and probability for years, then you might have reason to feel dumb. As it is, I would just be sure to read up on these things. This is the website of one of my professors:
http://www.quantum.umb.edu/Jacobs/books.html
The chapters in the measurement book have not been edited, so the reading might be dense at points, but I think it covers everything you need to know about basic prob. and QM measurement. You might also look into the book by Raymond LaFlamme and others. If you happen to find a cheap copy, it gives better background for a beginner. Also, I think that Nielsen covers QM in more depth later in the book. I haven't read it, but I've skimmed it a few times.

Ok thanks, it's starting to piece together a little better now. I'll definitely give that book a read.

So we can't say that [itex]|\alpha_{00}|^2[/itex] and [itex]|\alpha_{01}|^2[/itex] sum to 1, since they were only 2 of the 4 options in the original state. But we CAN say that for the second measurement they do sum to 1, once we include a factor that restores normalization, correct?
 

