nomadreid
Gold Member
I am not sure whether this question should go in Quantum Physics, Computers, or Linear Algebra. I am willing to see it moved if appropriate.
In Nielsen and Chuang's "Quantum Computation and Quantum Information", section 1.4.2 explains quantum parallelism (as preparation for Deutsch's algorithm) by introducing a "black box" U_f (the book is more explicit about its construction later): if f is a function from {0,1} to {0,1}, and x, y are qubits, then U_f |x, y> = |x, y⊕f(x)>, where ⊕ is addition mod 2. This definition confuses me, for two reasons:
(a) If the domain of f is bits, then how can we talk of applying it to a qubit x?
(b) If the range of f is bits, then what is the manner in which we add it to a qubit mod 2? For example, how does one calculate (1/√2)(|0> +|1>) ⊕ 1?
Thanks for any pointers. This point is preventing me from clearly understanding the algorithm.
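To make the question concrete, here is a small numpy sketch (my own construction, not from the book) of the matrix that the basis-state rule U_f |x, y> = |x, y⊕f(x)> would define, and what it does to the superposition above. The helper names are mine; I am assuming the standard ordering |00>, |01>, |10>, |11> of the two-qubit basis.

```python
import numpy as np

def U_f(f):
    """Build the 4x4 matrix sending basis state |x, y> to |x, y XOR f(x)>.

    Basis ordering: index 2*x + y, i.e. |00>, |01>, |10>, |11>.
    """
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    return U

# Take f to be the identity on {0,1} (so U_f is just a CNOT gate).
f = lambda x: x

# Input state: (1/sqrt(2))(|0> + |1>) on the first qubit, |0> on the second.
state = np.kron(np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, 0.0]))

out = U_f(f) @ state
print(out)  # amplitudes of |00>, |01>, |10>, |11>
```

If this is right, the definition only ever applies f to the bit labels of basis states, and the action on a superposition follows by linearity: here the output is (1/√2)(|00> + |11>), and nothing like "(1/√2)(|0> + |1>) ⊕ 1" ever needs to be computed directly.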