Quantum computing – the city and the city

Posted April 5, 2010 by qudit
Categories: Pensiveness

I was at a conference in Grenoble last year. Apart from the science, this was most memorable for the “cultural activity”: laserquest. When it came to picking teams, some bright spark suggested “physicists vs computer scientists”. Immediate shuffling on the part of most people, and then something interesting happened. We had people hesitating in the middle, unsure of which side to go to. General consensus sorted them out, and then objections were raised to some of those who had chosen immediately; “come on, of course you’re a computer scientist!” was one comment to a self-proclaimed physicist, who subsequently switched sides. I stayed and played, unchallenged, on the physics team (we lost). Back in the UK I started to tell colleagues the story, smug in my existential physics-ness, to be met with a curious gaze and the blunt question, “so which team did you play for?”.

It’s not just physics and computer science, of course. Quantum computing is a strange city, inhabiting multiple spaces: mathematics, engineering, philosophy, and even some tentative new forays into biology. It’s a two-way process: you only have to think about how quantum computing has changed the way quantum mechanics is taught in undergraduate physics courses to see the back-reaction at work. Quantum computing and information theory have added a whole new arsenal of tools to theoretical computer science; driven advances in, for example, category theory; been the impetus behind developing semiconductor and nanophotonic technologies; and helped redefine concepts of information and knowledge of physical systems.

But what if you want to go the other way – what if you want to work on quantum computing itself, not just on how it interacts with and reflects back on one particular academic discipline? What determines whether that is physics, computer science, maths…? Perhaps it’s the questions that are asked – some questions are fundamentally “physics” questions, and so on. Or perhaps it’s the tools used to answer them – if you use computer science tools such as complexity theory then of course you’re doing computer science. Or is it a function of the background of the researcher? If the researcher is in a mathematics department then, as a matter of course, their research is mathematics…

The problem is that, while the distinctions between the fields are clear in a lot of cases, quantum computing by its very nature sits on the cracks in the pavement. Building a quantum computer, programming it, knowing what it is capable of – all this requires knowledge, skills and people from all these different areas. Often questions don’t play fair, and don’t sit easily in one subject or another. For example, if we’re to design a repeater network as physicists, do we work out the physics of a single link and then throw our hands in the air and say “I’m not going to look at how the links are composed into a network, that doesn’t have a Hamiltonian in it so it’s not my business”? Of course not, because how we want to propagate information through the network determines the best physical link mechanism to use.

I say “of course not”, and yet that seems to be the norm of what actually happens. Physicists, computer scientists, engineers…, all wandering around Besźel and Ul Qoma, occupying the same problem space but studiously “unseeing” the landscape of the other fields. Designated cross-over points, generally conferences, are allowed, but try to do that in your everyday research and you’re soon in trouble. Which journals do you publish in? If you’re in a physics department, do your computer science publications count for the RAE? (answer: no). Who do you apply for jobs with? If you get a job in that department, how much does that restrict where you apply next? (answer: a lot). Who do you apply to for funding? And, looming over everything, where will you aim to get that all-important permanent position? And how much are you damaging that possibility by having a “fractured”, “interdisciplinary” track record?

There are of course people who bridge the gaps between the fields, who can quite happily work with people on both sides. They all, however, tend to be established researchers with unimpeachable credentials firmly in one discipline. Perhaps that’s then the way forward – we stick dutifully to one area for a good long post-doc stretch, and at the end we’re rewarded with brownie points to go and spend on interdisciplinary projects. The problem I find is in reconciling this way of working with actually doing good science. If there’s a problem that crosses the boundaries, why not use techniques and skills from the “other” city? If the problem pushes into a crosshatched street, why not follow it? How is science served by wearing blinkers that tell us to sidestep questions that aren’t in “our” field, even though they’re directly relevant to work that we’re told is? Perhaps we could all do with a bit more Breaching of boundaries in our work – seeing the city and the city, quantum computing and quantum computing.

Quantum programming

Posted February 6, 2010 by qudit
Categories: Random research

Here are the slides of a talk I gave last week at the Bristol NSQI users seminar, about programming quantum computers using a graphical calculus. It was aimed at a general science audience – so I assumed they knew what a bit was, but not a qubit. They seemed happy with that.

BristolFeb10

QIP collected II – tensor networks

Posted January 29, 2010 by qudit
Categories: Conference thoughts

There were a number of “wow” moments during the week, but one of the best for me was when Philippe Corboz introduced tensor network diagrams. The first half of his talk had sent me off on a completely different track, as an almost throw-away slide on the phase diagram of a pair of coupled antiferromagnets had at a stroke explained some numerical spin chain results that I’d been puzzling over for months. (The flip side of this, that the poster I was presenting two days later had an incorrect theory section, didn’t hit me till later that evening). So I was in the middle of flipping through a set of simulation results when I looked up and saw a slide full of tensor network diagrams. I was convinced I’d somehow teleported into a talk on categorical quantum mechanics. As he explained the networks in detail I realised he wasn’t talking directly about category diagrammatics, but I was sure that the similarities between the two couldn’t be coincidence. When he then introduced pink triangles to describe quantum states I was certain. It turned out, though, to be a remarkable example of parallel evolution: in the questions afterwards, he said that no-one involved had looked at any links with category theory, and that tensor network theory had evolved entirely independently.

Corboz told us that tensor networks are a numerical approximation method for 2-D lattice systems – kind of the 2-D equivalent of DMRG for spin chains. What got me particularly excited about them is that they appear to represent an information flow through the lattice – and hence, I think, the similarities with diagrammatics. The tensor network method consists of separating the lattice into small spin blocks, and approximating the ground state of the lattice as a tensor product of the pure states of those blocks. The coefficients of each block state themselves form a tensor, and the number of coefficients needed is governed by the entropy of the spin block, worked out using the area law. The tensor containing those coefficients is then what is tracked in the network theory. This is represented diagrammatically as a blob with legs, each leg being an index of the tensor.
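To make the “blob with legs” picture concrete, here’s a tiny numpy contraction – entirely my own toy illustration, nothing to do with Corboz’s actual algorithm. Each tensor is a blob, each index a leg, and joining two legs means summing over the shared index:

```python
import numpy as np

# Toy "blobs with legs" (illustration only, not a real PEPS code).
# Every index (leg) has bond dimension 2.
rng = np.random.default_rng(0)
A = rng.standard_normal((2, 2))      # blob with legs i, j
B = rng.standard_normal((2, 2, 2))   # blob with legs j, k, l
C = rng.standard_normal((2, 2))      # blob with legs k, l

# Wiring legs together = summing over the shared indices. The network
# A -j- B, with B's k and l legs both ending on C, contracts down to a
# single blob with one free leg, i.
result = np.einsum('ij,jkl,kl->i', A, B, C)
print(result.shape)  # (2,)
```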

It would be interesting to figure out the exact relationship between the evolution-tracking of these coefficients and the information flow of diagrammatic QM. There are some immediate connections that can be made, in particular with the Frobenius algebra calculus, when the scalar product is considered: blobs with legs going one way hit blobs with identical legs going the other and make a closed network loop. In general the tensor network scenarios are much more complicated than the few-qubit systems usually looked at using category theory – hence tensor networks being an approximation technique. But on the other hand category diagrammatics should work for information flow using logical qubits – in other words, large collections of physical qubits. If we could show the tensor network calculus to be an approximation of the category-theoretic one then not only would that justify its use as an ansatz, but we could then start importing results from category theory straight into spin systems. This could then potentially give us another set of powerful tools for simulating quantum spin ensembles. And perhaps we could even turn this around: if we then use these spin systems to perform adiabatic (i.e. ground state) quantum computation, then tensor network/category theory diagrammatics could become the basis for a high-level programming language, enabling us to construct spin lattices corresponding to particular information flows. Perhaps?
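In the 1-D case at least, the closed-loop picture of the scalar product can be written down explicitly. A minimal sketch – my own, using the standard matrix product state form on a ring, not anything from the talk:

```latex
|\psi\rangle = \sum_{s_1,\dots,s_N}
  \operatorname{Tr}\!\left(A^{s_1} A^{s_2} \cdots A^{s_N}\right)
  |s_1 s_2 \cdots s_N\rangle ,
\qquad
\langle\psi|\psi\rangle = \operatorname{Tr}\!\left(E^{N}\right) ,
\quad
E = \sum_{s} \overline{A^{s}} \otimes A^{s} .
```

The transfer object E is exactly a pair of identical blobs with legs running in opposite directions, and taking the trace closes the whole diagram into a loop.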

We may also be able to use such an analytic basis for tensor network theory in the same way that Norbert Schuch used DMRG: to try to prove results in complexity theory. This would certainly fit with the idea in several talks (not least Scott Aaronson’s linear optics experiment) that even the approximate classical simulation of a quantum system tells us something interesting about their relative complexity classes. Schuch used the fact that DMRG works to prove that 1-D spin chains whose Hamiltonian is a combination of commuting simple Hamiltonians can be efficiently classically simulated. Again the main idea, basic to DMRG, is to represent spin blocks by states whose entanglement obeys the area law – here, matrix product states. I would imagine that 2- and 3-D spin lattices fall into different complexity classes from 1-D ones – and hence a similar analysis of tensor networks might yield more results in complexity theory. We then come back to what I discussed in my previous post: the links between physical dynamics and complexity classes. To a physicist who is used to seeing complexity theory beyond P and NP as a curious form of stamp collecting, this is extremely interesting!
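My own shorthand for why dimensionality should matter here (standard area-law reasoning, not from Schuch’s talk): the block entropy scales with the boundary of the block, and the bond dimension D needed to represent the block scales exponentially in that entropy:

```latex
S(\rho_{\text{block}}) \propto |\partial\,\text{block}|
\quad\Longrightarrow\quad
D \sim e^{S} \sim
\begin{cases}
O(1) & \text{1-D gapped chain (boundary = 2 sites)} \\
e^{O(L)} & \text{2-D lattice, block of side } L
\end{cases}
```

So the 1-D description stays efficiently small while the 2-D one in general does not – at least consistent with the two falling into different complexity classes.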

As a final note on spin systems, we had Maarten van den Nest also talking about matrix product states (and, by extension, DMRG). He was defining a new class of quantum states and evolutions as “computationally tractable” for classical simulation. Amongst the computationally tractable states are MPSs and stabiliser states; an interesting juxtaposition, as we already know (in the sense that we can prove) that stabiliser states are efficiently classically simulable. Stabiliser states are also one of the easiest entities to represent pictorially as Frobenius algebras – perhaps another route into connecting category theory with tensor networks, via matrix product states?

QIP collected – I

Posted January 27, 2010 by qudit
Categories: Conference thoughts

QIP2010 was awesome. There really is no other word for it: it was like being plugged into the mains for a week, and I’m still trying to take in everything. There was the sheer insanity of Zurich for a start – not only the prices, but also being stopped on the street by an (apparently perfectly sober) man wanting to eat my companion’s liver. Then there was the Cabaret Voltaire, where the Dadaist movement started (complete with skinned cat over the mantelpiece), where on the first night we inadvertently crashed someone’s 30th birthday party. Along the way I learned how to skate, what semi-definite programming is (well, sort of), and how to get grant funding for quantum animal husbandry. Good times.

It was also the first time that I’d properly live-tweeted a whole conference, which in itself was certainly an experience. I now want to try to bring together some of the threads that emerged during the conference – and I think I need a bit more than 140 characters to do that. This will be quite long, so I’ll do it in instalments.

For me the highlight was definitely the emerging idea that I think of as “experimental complexity theory”. This is the notion (which cropped up in a couple of places) that there are potentially some experiments that could be performed to determine results in complexity theory. It’s very speculative and I have no idea if it would really fly in practice, but it’s incredibly exciting. The first time I noticed this appearing was in the context of spin lattice simulations. There were quite a few talks on spin systems, which I hadn’t realised were so interesting to complexity theorists – even though I’ve been working on spin chain simulation for a while now! Dan Gottesman gave the second talk of the conference, on using spin lattices to encode a set of rules in order to find the complexity of that ruleset. Each vertex (spin) on the lattice becomes a boolean variable; each edge (the interaction Hamiltonian between the spins) encodes the rule concerning those variables. The complexity of the ruleset then becomes the complexity of the problem of finding the ground state of the spin system.
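A completely classical cartoon of that mapping, to convince myself it makes sense – my own toy, not Gottesman’s construction, and the rules below are made up for illustration:

```python
from itertools import product

# Vertices are boolean variables; each edge contributes one unit of
# energy when its rule is violated. Finding the ground state of this
# "Hamiltonian" answers the satisfiability of the ruleset.
rules = [
    (0, 1, lambda a, b: a != b),   # edge rule: x0 XOR x1
    (1, 2, lambda a, b: a or b),   # edge rule: x1 OR x2
    (2, 3, lambda a, b: a == b),   # edge rule: x2 == x3
]

def energy(x):
    """Energy = number of violated edge rules in configuration x."""
    return sum(0 if ok(x[i], x[j]) else 1 for i, j, ok in rules)

# Brute-force "ground state" search; zero energy <=> all rules satisfied.
ground = min(product([0, 1], repeat=4), key=energy)
print(ground, energy(ground))   # (0, 1, 0, 0) with energy 0
```

The hardness is hiding in that brute-force `min`: the configuration space doubles with every vertex added.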

What was particularly interesting was the comment he then made about spin glass systems ‘finding their own ground state hard to compute’ – which comes out physically as a long relaxation time into the ground state. This was a fascinating connection between a complexity class – calculating the ground state of a spin glass is QMA-hard – and actual physically measurable behaviour. It got me thinking about whether it would ever be possible to get an experimentally viable spin material in which the spin-spin interactions can be individually modulated. Then, perhaps, it could be calibrated with problems of known complexity and used to time the relaxation into the ground state for unknown problems – thus giving a directly accessible measure of their complexity class? Of course I can think of plenty of problems with doing this in practice – although I’m reliably informed by a handy experimentalist that individually addressing the spins to produce different interactions isn’t as hard as I thought, in both NMR and quantum dot systems.

Julia Kempe and Roderich Moessner also discussed spin systems as essentially a physical instantiation of problems in complexity theory, this time in terms of the satisfiability of the ruleset. They seemed to be concerned with the simulation of such systems – Moessner pointing out that QSAT, the satisfiability problem for general spin lattice systems, is known to be a hard optimisation problem. This would, though, fit in nicely with the idea of using these systems to encode unknown problems (this time satisfiability problems) and watching as the system relaxes into the ground state. On a slightly more practical level, this seems to be an interesting simulation-type way of computing these problems – mapping rulesets directly onto Hamiltonians and then waiting for the system to relax to the ground state, rather than programming a quantum gateset or measurement pattern. Kempe discussed frustrated spin systems as well, stating that these corresponded to the mutual unsatisfiability of the constraints. It would then be interesting to turn this around: programming a spin system with constraints to find their mutual satisfiability. One thing I wasn’t sure about from Kempe’s talk was exactly what was meant by “dimers” in this context. For the spin systems I’ve been dealing with, it means two spins in a singlet state. She was using it to mean a “matching condition”: that two spins were associated with the same clause, and only that clause. If that really is the analogue of the singlet-state definition then this is fascinating, as she later talked about a phase transition happening between SAT and un-SAT when leaving the “dimer” state. There are phase transitions in spin systems that crop up whenever the ground state leaves a product state; it would be intriguing to be able to categorise these in terms of the transitions between different complexity classes.
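As I understand the standard set-up (my paraphrase, not taken from the slides), a QSAT instance is a Hamiltonian built from one projector per clause, and frustration is then precisely the mutual unsatisfiability of those projectors:

```latex
H = \sum_{c} \Pi_c , \qquad \Pi_c^{\dagger} = \Pi_c = \Pi_c^{2} ,
\qquad
\text{satisfiable} \;\Longleftrightarrow\; E_0 = 0 ,
\qquad
\text{frustrated} \;\Longleftrightarrow\; E_0 > 0 .
```

Each projector penalises the subspace violating its clause, so a zero-energy ground state is exactly a state satisfying every clause at once – even if each clause can be satisfied individually.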

The most worked-out discussion of “experimental complexity theory” came not in the form of spin systems, but rather linear optics. Scott Aaronson introduced what seemed to be in essence an experimental test for the collapse or otherwise of the polynomial hierarchy. Which, he informed us, is about as crazy a statement as saying P=NP. In other words, this is a big one. Even after getting his slides from the web I’m still not sure I understand exactly what he’s on about – largely, I think, because I have difficulty keeping the different complexity classes apart in my head. The motivation for the work is to produce an experimental set-up that could potentially falsify the extended Church-Turing thesis (the thesis that all physical systems can be simulated classically with polynomial time and resources). The choice is a linear optical system, and the question becomes: can we classically simulate the output distribution of photons from the circuit? The particular circuit and associated complexity problem that he’s chosen mean that even approximate efficient classical simulation of the output distribution would entail the collapse of the polynomial hierarchy. This then becomes a powerful argument for the non-simulability of this quantum system (at least for a computer scientist), as the collapse of the PH is considered so insane. And if this system is not classically simulable, then experimentally instantiating the particular linear-optical circuit given is a refutation of the extended Church-Turing thesis.
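The one piece I’m fairly sure I did follow (these are standard facts, so I’ll state them here rather than pretend the slides put it this way): the n-photon output amplitudes of a linear-optical network are permanents of n × n submatrices of the network’s unitary, and the permanent, unlike the determinant, is #P-hard to compute exactly (Valiant):

```latex
\operatorname{Per}(A) = \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)}
```

It’s the same formula as the determinant but without the alternating signs – and that tiny change is what takes it from polynomial-time to #P-hard.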

I’m not qualified to judge (or understand!) the intricacies of BPP^NP, #P and so on that are needed to make this hang together. But assuming all that works, and I’ve got the thread of the argument right, then if such a circuit is experimentally realised we’re left on the horns of a fascinating dilemma: we either have to accept the collapse of the polynomial hierarchy, or the incorrectness of the extended Church-Turing thesis. That’s pretty good for one experiment! The best part, I think, is the fact that we don’t have to prove anything about the classical simulability of that particular circuit to get to the point where we’ve got to give up either the PH or the x-CTT: the existence of a physical realisation of that circuit means they’re mutually incompatible. Personally I think that’s by far the most interesting way of phrasing this result. If we delve into the question of whether or not we can efficiently classically simulate this circuit then we get bogged down in how to prove a negative (that it isn’t possible), and in the reasonableness of assuming that the PH doesn’t collapse in order to argue for the incorrectness of the x-CTT. Of course, if we *can* find a classical simulation of the output then we’ve collapsed the PH, no problem. But we don’t need to: it’s an incredible thing that simply performing this experiment would mean giving up either the polynomial hierarchy or the extended Church-Turing thesis. It doesn’t get bigger than that, really.
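To get a feel for how quickly exact simulation blows up, here’s a toy permanent calculator using Ryser’s formula – my own illustration, nothing to do with Aaronson’s actual construction. Even this, the best-known exact method, takes time exponential in n:

```python
from itertools import combinations

def permanent(A):
    """Exact permanent via Ryser's formula:
    Per(A) = (-1)^n * sum over nonempty column subsets S of
             (-1)^|S| * prod_i (sum_{j in S} a_ij).
    O(2^n * n^2) time -- exponential despite being the best known."""
    n = len(A)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in A:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

print(permanent([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```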

First post. w00t.

Posted January 24, 2010 by qudit
Categories: Uncategorized

You’ve probably got to this site through my Twitter feed. Twitter is great, but has an obvious drawback if you want to communicate extended lines of thought. So this is the site where I get to be self-indulgent beyond 140 characters at irregular intervals.