What is quantum computing research mostly focused on?

In summary, quantum computing research is primarily focused on two main areas: theoretical research into complexity classes, algorithms, and programming languages, and the engineering and development of physical realizations of qubits. There is also a strong emphasis on reducing the number of qubits needed for error correction and simulation, as well as on exploring different quantum architectures. While there are still challenges to overcome, the fundamental ideas and concepts needed for quantum computing have largely been established. However, much work remains in engineering stable, cost-effective, and controllable qubits for practical use. Additionally, quantum process tomography, which involves characterizing the behaviour of entire quantum circuits, presents a significant challenge because of the probabilistic nature of quantum measurements and because the number of required measurements grows exponentially with the number of qubits.
  • #1
Domenico94
What is quantum computing research mostly focused on? I mean, is it mostly about the physical side (like building better quantum transistors or quantum diodes, or, for example, exploiting entanglement to achieve better results), or is it mostly focused on quantum architectures (a bit like von Neumann's and Alan Turing's work in the 1940s) and algorithms?
 
  • #2
There's highly theoretical research about complexity classes. Scott Aaronson blogs about that kind of thing. For example, see his post about the proof that QMA(2) ⊆ EXP or his post about a better total query complexity separation between quantum computers and BPP.

There's algorithmic research that will probably be extremely useful in practice, at least at first, e.g. to reduce the number of qubits needed to do error correction or to simulate particular types of matter. There's also programming language research into what will be convenient when doing quantum programming. Needing 100 qubits instead of 200 qubits is a big deal when your goal is to double the number of qubits you can keep coherent each year, and even that is considered ambitious.
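(A quick arithmetic aside, as a minimal Python sketch: if the number of coherent qubits doubles each year, halving the qubit requirement moves the goal exactly one doubling, i.e. one year, closer. The starting count of 5 qubits below is an arbitrary placeholder.)

```python
import math

# If coherent qubit counts double every year, needing 100 instead of 200
# qubits brings the goal exactly one doubling (one year) closer, regardless
# of today's count (5 is an arbitrary placeholder).
def years_until(target_qubits, current_qubits=5, doubling_time_years=1.0):
    return doubling_time_years * math.log2(target_qubits / current_qubits)

print(years_until(200) - years_until(100))   # 1.0
```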

And of course there's engineering and development; trying to create physical realizations of qubits that are held stable enough, cheap enough, and controllable enough to be used as quantum computers.

I think the extremely fundamental "What are the ideas we need to make this work?" Alan Turing-ish and Von Neumann-ish work is mostly done or at least sufficiently done, though I don't really know enough about things to say that with confidence. Interestingly the Claude Shannon-ish work is NOT done yet. There's still a lot to do. But mostly I think practical application is held up by the engineering challenges of invoking a good qubit into being.
 
  • #3
This document is getting old, but I think it will still give you an idea of the research going on:
P. Zoller et al., http://www.tnw.tudelft.nl/fileadmin/Faculteit/TNW/Over_de_faculteit/Afdelingen/Quantum_Nanoscience/Research/Research_Groups/Quantum_Transport/Group/People/Personal_pages/Leo_Kouwenhoven/doc/Quantum_information_processing_2005.pdf, Eur. Phys. J. D 36, 203 (2005)
 
  • #4
Wiki on quantum tomography claims that the work of QPT increases exponentially with the number of qubits involved and is hence an infeasible task for a sizeable number of qubits. I think that this clearly implies the infeasibility of practical quantum computing, since one needs to verify the correctness of the circuits of quantum computer hardware during design, manufacture and maintenance.
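(As an aside, the exponential scaling claim is easy to see from a parameter count. A minimal Python sketch, assuming the standard picture in which an n-qubit channel is a completely positive trace-preserving map on a d = 2^n dimensional space with d^4 − d^2 free real parameters, so the number of measurement configurations for full QPT grows roughly like 16^n:)

```python
def qpt_parameters(n_qubits: int) -> int:
    """Free real parameters of a general n-qubit CPTP map (d = 2**n)."""
    d = 2 ** n_qubits
    return d**4 - d**2

for n in (1, 2, 5, 10, 20):
    print(n, f"{qpt_parameters(n):,}")
# 1 -> 12, 2 -> 240, 5 -> 1,047,552, 10 -> ~1.1e12, 20 -> ~1.2e24
```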
 
  • #5
mok-kong shen said:
Wiki on quantum tomography claims that the work of QPT increases exponentially with the number of qubits involved and is hence an infeasible task for a sizeable number of qubits. I think that this clearly implies the infeasibility of practical quantum computing, since one needs to verify the correctness of the circuits of quantum computer hardware during design, manufacture and maintenance.

The computer you wrote that response on probably has more than 1 GiB of memory. That means your computer can be in ##2^{8 \cdot 2^{30}}## states. I'd write out that number, but it has over two and a half billion digits. Your computer can be in more than ##10^{26}## states! That is insane, right? No one could ever have tested that many!
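(Checking that figure is a one-liner; a minimal Python sketch of the digit count, assuming 1 GiB = 2^30 bytes:)

```python
import math

# Number of distinct states of 1 GiB of classical memory: 2**bits, where
# bits = 8 * 2**30.  We only want the size of that number, so compute
# digits = floor(bits * log10(2)) + 1 instead of the number itself.
bits = 8 * 2**30
digits = int(bits * math.log10(2)) + 1
print(f"2**{bits} has about {digits:,} decimal digits")  # ~2.6 billion digits
```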

In fact, because the number of states is so ridiculously high, no one has checked that even 0.000000001% of the states work! Therefore, by the same logic you're using to dismiss quantum computers, computers like the one you wrote your response on must be failing 99.999999999% of the time and you in fact did not succeed in posting. Odds are the information revolution is nothing but a shared delusion. /s

In case you're not catching my drift, I'll be explicit: it's not necessary to check every possible state or state transition to get something working in practice. Engineering mistakes have a tendency to break huge swaths of the state space, so that checking the first 0.0000000000000000000000001% of states eliminates the first 99.9% of errors. Furthermore, really subtle errors that affect only, say, 0.0000000000000000001% of the states often don't matter in practice because users never encounter them. Really tricky errors have to be rare enough that you can't quite test enough on your own to find them, but a million users can. There's a substantial number of such errors, but not so many that computers don't work well enough to get useful things done.
 
  • #6
@Strilanc: Even for classical hardware, there are constantly checks involved while your computer is running, only you are not aware of them. One of them is verifying the correct writing of files to disk storage. Because classical computer hardware has been greatly improved over the decades, you would seldom see messages about irrecoverable errors. (Nevertheless, several times in the past my PC suddenly broke down such that it had to be rebooted. In the nineteen seventies, computer centres with their mainframes (PCs were not yet born) had to reserve a couple or more hours each day for maintenance, during which no normal jobs could be run.)

But quantum hardware is different from classical hardware: if one checks the input qubits of a gate somewhere in a circuit during testing, they are no longer what they were, and the qubits have probabilities as their characteristics, in distinction to classical bits. That very much complicates the testing, in my view. If I don't err, one has to do a sufficiently large number of tests of a circuit, each time with input qubits satisfying certain specifications (which by itself, i.e. assuring the uniform quality of the sources of qubits, is not trivial, I am afraid), and test the output qubits in order to determine whether their probability characteristics satisfy the desired specifications. (Being a layman, I have to admit that my current knowledge does not yet cover how one proceeds to concretely determine these probabilities, since the underlying amplitudes are in general complex-valued quantities and not real-valued ones.)

I believe that these facts underlie the exponential increase in the work of quantum process tomography with the number of qubits involved, as claimed in the Wiki article. (We need to ascertain the correct functioning of entire circuits and not simply that of the individual gates contained in them, hence the word "process" in its name, I suppose.)
 
  • #7
If you think circuit verification is an exponential wall that will stop quantum computers from working, then you need to learn more about how we actually intend to make these things. The fact that experts are actually bothering to try to engineer quantum computers should be an indication that they don't think problems obvious to a layman are insurmountable. You should familiarize yourself with a problem and its proposed solutions before pronouncing it insurmountable.

If we were approaching the design problem by making random circuits and checking that they do what we want, then verification would in fact be impossibly difficult. But no one is going to use a design strategy that dumb. We're going to break the problem down into simple pieces, verify that those pieces work, prove that putting them together in a particular way should work, then put them together, check that it worked, and iterate. Just like we did with every other complicated system ever.

The basics of making large quantum circuits out of simple pieces were figured out twenty years ago. We can use T, H, P, and CNOT gates to approximate any desired operation arbitrarily well. The gate error and approximation error at each step compound additively instead of exponentially. The cost of making the errors small enough, by using more gates in the approximation and for quantum error correction schemes, is only polylogarithmic in the size of the computation.
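(A minimal numpy sketch of the additive-error point, using only single-qubit gates and a made-up first-order noise model; "P" below is assumed to denote the phase (S) gate.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Ideal single-qubit gates (P assumed to be the phase/S gate).
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
P = np.diag([1, 1j])

def noisy(U, eps):
    """Toy noise model: left-multiply by (I + i*A) with ||A|| = eps,
    a first-order stand-in for a small coherent error on the gate."""
    A = rng.standard_normal(U.shape) + 1j * rng.standard_normal(U.shape)
    A = (A + A.conj().T) / 2
    A *= eps / np.linalg.norm(A, 2)
    return U + 1j * A @ U

eps = 1e-3
gates = [H, T, P, T, H] * 20          # a 100-gate sequence
ideal = actual = np.eye(2, dtype=complex)
for U in gates:
    ideal = U @ ideal
    actual = noisy(U, eps) @ actual

# Triangle inequality: the accumulated error is bounded by roughly
# len(gates) * eps.  Per-gate errors add at worst; they do not blow up
# exponentially with circuit depth.
print(np.linalg.norm(actual - ideal, 2), "<= ~", len(gates) * eps)
```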

What's been holding us back isn't the size of the state space, it's getting the simple gates good enough.
 
  • #8
@Strilanc: Of course I, as a layman, have to accept/follow experts' opinions. But is there any reason for me as a layman to doubt the words of the author of the Wiki article, who apparently is an expert, that QPT is an infeasible task for a sizable number of qubits? My deduction from that information to the infeasibility of quantum computing as such you may certainly question as to its logical validity, but I don't yet clearly see your concrete points in that respect, such that I could attempt to counter-argue eventually. (I did, though, give my personal arguments that the correctness of any entire circuit (dedicated to a specific function) of quantum hardware should be verified as a single entity and that verification of its components alone is unlikely to be sufficient for that purpose.)
 
  • #9
mok-kong shen said:
@Strilanc: Of course I, as a layman, have to accept/follow experts' opinions. But is there any reason for me as a layman to doubt the words of the author of the Wiki article, who apparently is an expert, that QPT is an infeasible task for a sizable number of qubits? My deduction from that information to the infeasibility of quantum computing as such you may certainly question as to its logical validity, but I don't yet clearly see your concrete points in that respect, such that I could attempt to counter-argue eventually. (I did, though, give my personal arguments that the correctness of any entire circuit (dedicated to a specific function) of quantum hardware should be verified as a single entity and that verification of its components alone is unlikely to be sufficient for that purpose.)

Your belief that full QPT of every circuit is a necessary step for practical quantum computing is simply wrong. QPT is neither necessary nor sufficient for quantum computing to be practical. It's a separate thing.
 
  • #10
@Strilanc: I should appreciate some elaboration of your claim.
 
  • #11
If you want more than the high level details I've already given, you should buy a quantum computing textbook. It's a lot of information to explain.
 
  • #12
@Strilanc: Actually I recently asked a physicist working in quantum computing research how one could know that, if a hardware quantum gate (not a more complex circuit!) is specified to deliver output qubits of certain states when the input qubits are in certain other states, the hardware indeed satisfies the given specification. He referred me to quantum tomography. (The ten or so textbooks on quantum computing I had looked at before posing that question to him didn't even mention quantum tomography.)
 
  • #13
mok-kong shen said:
@Strilanc: Actually I recently asked a physicist working in quantum computing research how one could know that, if a hardware quantum gate (not a more complex circuit!) is specified to deliver output qubits of certain states when the input qubits are in certain other states, the hardware indeed satisfies the given specification. He referred me to quantum tomography. (The ten or so textbooks on quantum computing I had looked at before posing that question to him didn't even mention quantum tomography.)

... How does that support your point?

I told you to read a quantum computing textbook because textbooks contain constructions for building complicated operations out of simple gates while guaranteeing bounds on error, not because they'll explain how to do testing.

I'm just trying to correct your mistaken belief that the only possible kind of testing is uninformed black-box brute-force integration testing, so useful testing is impossible, so engineering complicated systems is impossible.
 
  • #14
@Strilanc: Please kindly name a textbook, page number, and cite a few lines concerning the method of testing the states of the output qubits of a quantum gate. Quantum error correction alone is not sufficient, IMHO, unless you could cite something to the contrary. (In fact, consider a design that is wrong even in theory, i.e. it actually does something different from what the designer wants, and there is no error in transmission etc.: how would you know that the hardware is not in accordance with its specification?)
 
  • #15
mok-kong shen said:
@Strilanc: Please kindly name a textbook, page number, and cite a few lines concerning the method of testing the states of the output qubits of a quantum gate. Quantum error correction alone is not sufficient, IMHO, unless you could cite something to the contrary. (In fact, consider a design that is wrong even in theory, i.e. it actually does something different from what the designer wants, and there is no error in transmission etc.: how would you know that the hardware is not in accordance with its specification?)

We are clearly talking past each other instead of making progress, because my argument is that the difficulties you're worried about aren't difficulties. Textbooks don't address them; they go along totally different routes, so if I cited page numbers you'd complain that they were about approximating arbitrary operations with a gate set instead of about verification.

Anyway, here's a recent paper about measuring qubit error and scaling up to useful quantum computers. It has seven "technology levels", kind of like a roadmap. Process tomography is explicitly dropped at level 2:

Process tomography can be performed for multiple qubits, but is typically abandoned because (i) the number of necessary measurements scales rapidly with increasing numbers of qubits, (ii) information on error coherence is hard to use and (iii) it is difficult to separate out initialization and measurement errors

Note that this isn't an argument against making it past level 2, it's an argument against using quantum process tomography. That's why there are 5 more levels, instead of "and therefore we give up".
 
  • #16
@Strilanc: In my view verification goes beyond error correction. Error correction deals with unavoidable noise, while verification asks whether, at a certain test moment, a circuit or gate as such functions exactly as specified. Problems may occur due to design mistakes (e.g. some infrequently occurring situations were not properly taken care of), faults in the manufacturing process (errors of workers or machines), material aging and, last but not least, wrong handling by the maintenance people. These are clearly not in the realm of error correction (cf. the error-correcting codes used in classical computing). Thus, as I wrote earlier, quantum process tomography is necessary in combination with quantum error correction measures to achieve correct quantum computing in practice. (In other words, your mentioned level 2 is the least that must be achieved for any practically useful quantum computing. If that's infeasible (according to Wiki), then further research efforts would be a waste of resources.)
 
  • #17
A full process tomography would tell you everything about the computation, some of which you don't need to know, because not all circuits are created equal. For example, some highly nonlocal gates probably can't even be realized in nature. But other than that, under some assumptions, there are ways to certify that a device works as intended without doing the full tomography. http://arxiv.org/abs/1104.4695, http://arxiv.org/abs/1104.3835, and subsequent works that cite these are examples.
 
  • #18
@Truecrimson: At the beginning of the paper, it says that the method is only applicable to pure states. Couldn't that be quite a severe limitation in the context of practical quantum computing in general?
 
  • #19
That would be quite a severe restriction indeed. But there are other subsequent works. Things like blind quantum computing and bootstrapping use a small or trusted quantum computer to verify that a larger or untrusted quantum computer works as advertised.

But do you agree with my first point (which is Strilanc's as well) that in reality you never have zero knowledge of what is going on in your experiment? A full tomography would be required (even in the classical case) if you had zero prior information, but that's not the case. So the fact that tomography is infeasible does not imply that computation is infeasible.
 
  • #20
@Truecrimson: Nothing in the real world is perfect, so a certain probability of errors and the corresponding risks of damage etc. always have to be accepted, depending on the particular situation one (in contrast to other people) is in and on one's own viewpoints ("philosophy"). In other words, one's (free) decision may well be different from that of others. In the present context of (future) practical quantum computing, I would personally not use the services of a quantum computer for which it is known that, in its design, manufacture and maintenance phases, there is absolutely no (practically feasible) way provided to experimentally verify the correct functioning of an arbitrarily chosen component of the hardware in case of need or desire.

I doubt that I correctly understand your term "prior information" above. What is that? Is it, for example, a label on a piece of hardware, put there by its manufacturer, saying that it is something for a certain purpose (and that the manufacturer is very famous), or what?
 
  • #21
I mean that a full tomography is required only when we have no idea at all what the process in our experiment is doing, which is not the case. We know roughly what the experiment is doing; that is the "prior information," which of course does not perfectly match reality, but that is very far from knowing nothing.
 
  • #22
@Truecrimson: Any scientific design is based on theory which may very well be sound, so one certainly knows what a piece of hardware is supposed to be doing. But is it indeed doing that? The designer could have made a mistake (e.g. in a technical drawing), the manufacturer could have used wrong or poor materials, and the maintenance people could have accidentally done something inappropriate to it. I am not demanding frequent checks, which would be uneconomical and thus foolish, but there should "exist" possibilities of performing checks at acceptable cost and within acceptable timeframes, etc., in case reasonable doubts surface that a certain computational result appears questionable not because the programming was wrong but probably due to some hardware problem.
 
  • #23
There are reasonable checks, and they are not full quantum tomography. Full quantum tomography is unreasonable.
 
  • #24
  • #25
Strilanc already did, and I'm not sure what more it is that you want. Full quantum tomography gives us much more information than we need. It takes an exponential amount of resources even to write down the result of the tomography.

By the way, what is known to be infeasible is full quantum tomography. That's why I keep writing the word "full" in my replies. You might have associated the infeasible full quantum tomography with any procedure to verify and validate computation, which could also fall under the rubric of quantum tomography. Sure, some kinds of quantum tomography will be needed, but not the full one.

I gave some earlier papers that thought about this problem. You really should put those into Google Scholar to see other related works like compressed sensing, randomized benchmarking, or bootstrapping, which have to come with some ways to quantify errors and certify that the method really works. Admittedly, I don't work directly in this area, so I couldn't provide a better, more realistic perspective on this.
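(A minimal Python sketch of just the analysis step of single-qubit randomized benchmarking, with made-up numbers: survival probability is modeled as A·p^m + B versus sequence length m, and the average error per gate for a single qubit is r = (1 − p)/2. This is not a simulation of the full protocol, only of the fitting step.)

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy data for the survival probability A*p**m + B at sequence length m,
# with made-up values; for a single qubit the average error per gate is
# r = (1 - p) / 2.
rng = np.random.default_rng(1)
true_p, A, B = 0.995, 0.5, 0.5
lengths = np.array([2, 4, 8, 16, 32, 64, 128, 256])
survival = A * true_p**lengths + B + rng.normal(0, 0.005, size=lengths.size)

def model(m, A, B, p):
    return A * p**m + B

popt, _ = curve_fit(model, lengths, survival, p0=[0.5, 0.5, 0.99])
A_fit, B_fit, p_fit = popt
print("estimated error per gate:", (1 - p_fit) / 2)   # close to 0.0025
```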
 
  • #26
Lacking knowledge, I couldn't understand much of what is written in Martinis's paper (the reference given by Strilanc), let alone attempt to counter-argue. However, I have the impression that the manner in which quantum tomography is mentioned at two places in the paper is a little bit biased, or at any rate not very clear. One first reads there:

"In level 1 ... Quantum process tomography is often performed on one- and two-qubit gates, which is important as it proves that proper quantum logic has been achieved. ..."

I suppose that readers of it would afterwards (highly likely) not have any doubts about a similar "importance" of QPT for more-than-two-qubit gates. (I personally think that QPT is extremely important for a really solid proof of correctness of any quantum gate or circuit, but this is of no relevance to the present post of mine.)

Next one reads from the paper:

"In level 2 ... Process tompgraphy can be performed, but is typically abandoned because (i) the number of measurements scales rapidly with increasing number of qubits, (ii) information on error coherence is hard to use and (iii) it is difficult to separate out initialization and measurement errors. ..."

In my view (a) this doesn't (clearly) tell the reader whether, for cases of more than 2 qubits, quantum process tomography remains as important for the proof of "proper quantum logic" as in the case of one or two qubits, and (b) this (in my interpretation) is in fact merely a more detailed formulation of the sentence in Wiki on quantum tomography claiming that QPT is practically infeasible for larger numbers of qubits; consequently, if the "importance" in (a) holds, then the content of the higher levels treated in the paper would lose its significance.
 
  • #27
One area of particular interest in quantum research is Quantum Key Distribution (QKD). Conventional cryptographic primitives (RSA, DES, 3DES, AES) do not by themselves guarantee a tamper-evident key exchange; QKD can provide such a key exchange between two parties.

Before sending a message (text or data), an encryption method is used which is generated from a cipher, which in turn depends on an initial value termed the 'seed' of the 'key'. The encrypted message is then sent and must be decrypted by the receiver using the same 'key'. The issue is not the data but the key exchange: was the key sent before the data and intercepted? Perhaps the key was hidden inside the data itself and intercepted. And if so, how can you determine whether the message has gone through a man in the middle?

This motivates QKD protocols such as BB84, which rely on photon polarizations. A basis, either diagonal or rectilinear, is chosen for each photon by both sender and receiver. Using a simple tabulated method, one can build a secure key exchange between two parties.

Measuring a photon's polarization changes its state, and thus the system. If the key arrives in a different state from that sent by the sender, it is assumed to have been tampered with, which is what makes QKD such a safe method of key exchange.
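(A minimal Python sketch of the BB84 sifting step only, with no eavesdropper, error correction, or privacy amplification; the sequence length of 32 is arbitrary.)

```python
import secrets

# BB84 sifting only: Alice encodes random bits in random bases, Bob measures
# in random bases, and they keep the positions where the bases matched.
n = 32
alice_bits  = [secrets.randbelow(2) for _ in range(n)]
alice_bases = [secrets.randbelow(2) for _ in range(n)]   # 0 = rectilinear, 1 = diagonal
bob_bases   = [secrets.randbelow(2) for _ in range(n)]

bob_bits = [
    bit if a_basis == b_basis else secrets.randbelow(2)   # wrong basis -> random outcome
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases)
]

# Publicly compare bases (never the bits) and keep the matching positions.
sifted = [b for b, a_basis, b_basis in zip(bob_bits, alice_bases, bob_bases)
          if a_basis == b_basis]
print(len(sifted), "sifted key bits out of", n)
```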
 
  • #28
mok-kong shen said:
Lacking knowledge, I couldn't understand much of what is written in Martinis's paper (the reference given by Strilanc), let alone attempt to counter-argue. However, I have the impression that the manner in which quantum tomography is mentioned at two places in the paper is a little bit biased, or at any rate not very clear. One first reads there:

"In level 1 ... Quantum process tomography is often performed on one- and two-qubit gates, which is important as it proves that proper quantum logic has been achieved. ..."

I suppose that readers of it would afterwards (highly likely) not have any doubts about a similar "importance" of QPT for more-than-two-qubit gates. (I personally think that QPT is extremely important for a really solid proof of correctness of any quantum gate or circuit, but this is of no relevance to the present post of mine.)

Next one reads from the paper:

"In level 2 ... Process tomography can be performed, but is typically abandoned because (i) the number of measurements scales rapidly with increasing number of qubits, (ii) information on error coherence is hard to use and (iii) it is difficult to separate out initialization and measurement errors. ..."

In my view (a) this doesn't (clearly) tell the reader whether, for cases of more than 2 qubits, quantum process tomography remains as important for the proof of "proper quantum logic" as in the case of one or two qubits, and (b) this (in my interpretation) is in fact merely a more detailed formulation of the sentence in Wiki on quantum tomography claiming that QPT is practically infeasible for larger numbers of qubits; consequently, if the "importance" in (a) holds, then the content of the higher levels treated in the paper would lose its significance.

At this point you are approaching parody. Calling the paper "a little bit biased" because it only mentions quantum tomography twice actually made me laugh out loud.

It's like... you randomly got this idea in your head that full QPT was an important step, read that it scaled poorly, and despite being repeatedly told that it's not an important step and being given high-level explanations of why, you just keep on trucking despite admitting you don't know anything about the field.

Quantum computers will not be randomly-chosen magic black boxes that can only be tested in one big piece. Requiring full QPT of a quantum circuit is equivalent to refusing to buy a phone until you get a certificate stating that every memory state has been tested individually. It's massive absurd obscene unnecessary [insert 10 more adjectives] overkill.

I'm going to arbitrarily say that this is my last reply. I'm actually starting to think you're a troll. Regardless, it's clear that progress isn't being made.
 
  • #29
@Strilanc: It is certainly anyone's freedom to refrain from replying, for any reason. I just want to stress that in my earlier post I wrote that the paper referred to QPT at two places, and then quoted the two places I meant. There was no intention at all to say "it only mentions QPT twice", nor did I imagine that "it only mentions QPT twice" could somehow assist my argumentation. To be honest, I didn't read the paper very carefully, so as I write this post I don't even know whether the term QPT also occurs at positions later than the two I cited.
 
  • #30
This thread is going around in circles and has drifted away from the OP's question. Time to close.
 

Related to What is quantum computing research mostly focused on?

1. What is quantum computing research mostly focused on?

Quantum computing research is mostly focused on developing and improving quantum computers, which use quantum bits (qubits) to perform calculations and, for certain classes of problems, solve them much faster than classical computers.

2. What are the potential applications of quantum computing?

The potential applications of quantum computing include improving encryption and data security, simulating complex systems such as chemical reactions, and optimizing processes in fields like finance and healthcare.

3. How does quantum computing differ from classical computing?

Quantum computing differs from classical computing in that it uses quantum bits (qubits) instead of classical bits. Qubits can exist in superpositions of states and can be entangled with one another, which allows certain problems that are intractable for classical computers to be solved far more efficiently.
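(A minimal numpy sketch of the bookkeeping difference: describing a general n-qubit state takes 2^n complex amplitudes; here n = 3 and the first qubit is taken as the most significant bit.)

```python
import numpy as np

n = 3
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                                   # start in |000>

# Apply a Hadamard to the first qubit: the register becomes an equal
# superposition of |000> and |100>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
op = np.kron(H, np.eye(4))                       # H on qubit 0, identity on the rest
state = op @ state

print(np.nonzero(state)[0])                      # indices 0 and 4, i.e. |000> and |100>
print(abs(state[0])**2, abs(state[4])**2)        # each outcome has probability 1/2
```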

4. What are the current challenges in quantum computing research?

Some of the current challenges in quantum computing research include improving the stability and coherence of qubits, developing error correction methods, and finding ways to scale up quantum systems to handle larger and more complex problems.

5. How can quantum computing impact our daily lives?

Quantum computing has the potential to impact our daily lives in many ways, such as improving the speed and security of online transactions, optimizing transportation and logistics, and aiding in drug discovery and medical research. It may also lead to advancements in artificial intelligence and machine learning.
