Found in 4 comments on Hacker News
ahelwer · 2020-07-22 · Original thread
I will answer this question in two ways: one, I will tell you how I learned, and two, I will tell you how I would have liked to learn with the benefit of hindsight. Different people will value one more than the other, but both are more valuable than a giant list of resources with no guidance on where to start.

How I learned:

I started out like you, in possession of an undergraduate education in computer science. I began reading Quantum Computer Science: An Introduction[0] by N. David Mermin. This is a very good textbook, but I absolutely could not skim it. I had to ensure I understood every single line before moving on to the next. I had the impression I wasn't learning very quickly, when in fact (due to the textbook's density) I was taking in a huge amount of information.

After a few weeks with the Mermin textbook, I bought Quantum Computing for Computer Scientists[1] by Yanofsky & Mannucci. This is a much softer introduction than Mermin, almost too soft: I skipped the first few chapters on linear algebra and complex numbers. However, in combination with the Mermin textbook, I acquired a good understanding of quantum computing basics. It was at this point I reached my own personal threshold for feeling I "understood" quantum computing.

People often recommend Quantum Computation and Quantum Information by Nielsen & Chuang (also called "Mike & Ike") for beginners. I believe this is not good advice. Had I tried to learn from that textbook, I would have failed. However, it is an excellent textbook after you already understand the basics. Anecdotally, I knew two people who tried to learn quantum computing at the same time as me: one used Mike & Ike, and the other used a book called Quantum Computing: A Gentle Introduction. Neither of those people understands quantum computing today.

How I wish I had learned:

My experience learning quantum computing required a huge amount of mental effort, and in the end what I learned wasn't actually complicated! So, I created a lecture called Quantum Computing for Computer Scientists[2], which is the lecture I wish I'd had access to before trying to read any textbooks. The lecture is popular and well-received, and I think it covers all the stuff that's really conceptually tricky; once you're over those conceptual hurdles, you can apply your regular computer science skills to learn everything else about quantum computing you need (how specific algorithms work, etc.). Thus my "hindsight" study guide is as follows:

1. Watch the lecture I created.

2. Watch Professor Umesh Vazirani's lectures on quantum computing; they flesh out my lecture and he is a tremendously effective explainer of concepts (these are scattered around YouTube but you can find a full playlist at [3])

3. Concurrently, work through the first few chapters of either the Mermin or Yanofsky textbooks

4. After you feel you understand the quantum computing basics, pick topics which interest you from the Nielsen & Chuang textbook

5. Stick around quantumcomputing.stackexchange, reading questions & answers, asking your own, and maybe eventually answering your own!

Good luck!

P.S. I've also heard good things about the Quantum Katas:



[2] + slides


platelminto · 2020-04-06 · Original thread
I'm studying Quantum Computing for Computer Scientists [1] - until I found this book, I thought anything covering quantum would be too physics-oriented. As the title implies, this book is nothing like that, covering all the mathematics needed (matrices and relevant operations) to then understand various topics within QC ranging from Algorithms to Programming Languages to Cryptography, all in largely self-contained chapters.

I'm currently working through the Algorithms chapter, which builds up from Deutsch's Algorithm [2] all the way to Shor's Factoring Algorithm [3], but I will definitely end up going through most of the chapters.
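To give a taste of what that chapter starts from, here is a minimal NumPy state-vector sketch of Deutsch's algorithm (the function names and structure are mine, not the book's): it decides whether a one-bit function f is constant or balanced using only one call to the oracle.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
I = np.eye(2)

def U_f(f):
    """Oracle |x, y> -> |x, y XOR f(x)>, built as a 4x4 permutation matrix."""
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    return U

def deutsch(f):
    """Return 0 if f is constant, 1 if f is balanced, with one oracle call."""
    state = np.kron([1, 0], [0, 1]).astype(float)  # start in |0>|1>
    state = np.kron(H, H) @ state                  # Hadamard both qubits
    state = U_f(f) @ state                         # single oracle query
    state = np.kron(H, I) @ state                  # Hadamard the first qubit
    p1 = state[2] ** 2 + state[3] ** 2             # P(first qubit measures 1)
    return int(round(p1))

print(deutsch(lambda x: 0))  # constant -> 0
print(deutsch(lambda x: x))  # balanced -> 1
```

A classical algorithm needs two evaluations of f to answer this; the quantum version needs one, which is the seed of the speedups the later chapters build toward.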




johnbender · 2017-01-15 · Original thread
Quantum Computing for Computer Scientists

Note, you have to be willing to put the time in, especially if your linear algebra is rusty or (like me) you have only a passing familiarity with complex numbers.

With that in mind, it's almost entirely self-contained and you can immediately start to make connections with classical computing if you're familiar with automata.

I've been interested in learning about quantum computing for a few years now and this book finally got me going.


As an aside it's a really great excuse to try out one of the many computer algebra systems out there. I gave Mathematica a trial for fun since I'd already used SageMath in the past.

Think of it like this.

What if you represented your computer's bits as an arrow on a compass? You also have the plane that the compass sits on.

The arrow points in one direction for 1 and in another direction for 0.

Regular computation has you doing operations on bits: you can only do one operation at a time on one bit, flipping bits according to a recipe.

A probabilistic computer allows you to do some special things. For example, you have a magical random source from which you can draw a magic variable. This magic variable is a random variable. One place this gets used is a primality test based on Fermat's little theorem.

Given a number, how do we test its primality? Naively we would have to try dividing it by each number smaller than itself (there are optimizations I am glossing over for this exercise).

What if we use this magical random variable? It can be any number from 0 up to some natural number.
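The test being gestured at here can be sketched in a few lines (the function and parameter names are mine): pick random bases a and check whether a^(n-1) ≡ 1 (mod n), which Fermat's little theorem guarantees whenever n is prime.

```python
import random

def fermat_test(n, trials=20):
    """Probabilistic primality test from Fermat's little theorem:
    if n is prime, then a**(n-1) % n == 1 for every a in 1..n-1."""
    if n < 4:
        return n in (2, 3)
    for _ in range(trials):
        a = random.randrange(2, n - 1)   # the "magic" random variable
        if pow(a, n - 1, n) != 1:        # fast modular exponentiation
            return False                 # definitely composite
    return True                          # probably prime

print(fermat_test(97))   # True
print(fermat_test(100))  # False
```

Each trial is cheap (one modular exponentiation), so a handful of random draws replaces a long march of trial divisions; the price is a small chance of error (rare composites called Carmichael numbers can fool this particular test).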

If we have a few other algorithmic tools we can make a probabilistic search of an array. With a random variable we can do the following operation:

    findingA_LV(array A, n)
    begin
        repeat
            Randomly select one element out of n elements.
        until 'a' is found
    end

This searches the array A randomly and succeeds with probability 1 (this is very important... if your success probability were 0 you probably wouldn't have a working algorithm ;) )
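The pseudocode above, as a runnable Python sketch (names are mine): the loop only terminates when it succeeds, which is what makes it a Las Vegas algorithm - the answer is always correct, only the running time is random.

```python
import random

def finding_a_lv(A, target):
    """Las Vegas search: sample random indices until we hit the target.
    Succeeds with probability 1, but the number of tries is random."""
    tries = 0
    while True:
        i = random.randrange(len(A))
        tries += 1
        if A[i] == target:
            return i, tries

A = [3, 1, 4, 1, 5, 9, 2, 6]
index, tries = finding_a_lv(A, 9)
print(index)  # 5 -- the position of 9; `tries` varies from run to run
```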

What's the advantage? Well, instead of a sequential search of the array you can randomize the search. The probability of eventual success is 1! So if we are really lucky we could win very fast. If the array is SMALL then that might be even better, right? If the array has one element then we always win on the first try, but that's not fun :(

But a more useful example is quicksort: with a randomized pivot it gets FAST, expected O(n log n), on any input.

Now, let's make a quantum leap. Let's go faster than FAST. Let's get ridiculous.

Let's say that we have a new magical bit. People call it the qubit. The more you have, the better off you are.

Entanglement can be described as follows: you can magically correlate two qubits together. Algorithmically, no one cares how; on the engineering side, though, the ability to do this is worth trillions of dollars. The advantage has the following feature.

If we want to factor a number we can do it faster than any known classical algorithm. By classical I mean following the rules of your standard computer.

Quantum computers allow you to BREAK the rules of classical ones. Here's how.

Lets review the two computers we have discussed before.

The first kind of computer is the Turing machine, which recognizes the recursively enumerable languages/problems. It follows rules via a finite control (basically a CPU) and has a tape which it can write to while moving left or right.

The second kind of computer has the magical randomness: it runs randomized algorithms. If you are feeling lucky you might get lucky, and this reduces your runtime; in the average case you win more often than you lose.

Our magical quantum computer allows for correlation of two or more "variables" (called qubits). How is this better?

In quantum physics, when you observe a particle you "collapse its wave function". Making an observation is analogous to returning a value in any C-like language; the difference is how it returns, and the magical things it does inside the function before returning.

The magic is as follows. A qubit is measured in something called the computational basis. What does that mean? It means you pick two basis vectors in the "plane" (really a complex vector space); geometrically, a qubit is a 3D arrow that can point anywhere on a sphere. They call this the Bloch sphere.
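Concretely, a qubit can be modeled as a unit vector of two complex amplitudes, and measuring in the computational basis gives each outcome with probability equal to the squared magnitude of its amplitude. A toy NumPy illustration (not real hardware, and the particular state is just an example):

```python
import numpy as np

# A qubit is a unit vector a|0> + b|1> with complex amplitudes a and b.
state = np.array([1, 1j]) / np.sqrt(2)

# Measuring in the computational basis {|0>, |1>} collapses the state;
# outcome probabilities are the squared magnitudes of the amplitudes.
p0, p1 = np.abs(state) ** 2
print(p0, p1)  # both ~0.5: an equal superposition of |0> and |1>

assert np.isclose(np.linalg.norm(state), 1)  # states are always unit length
```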

With this magic you can do magic tricks. One such trick is searching an UNSORTED database in O(sqrt(n)). This sounds ridiculous, right?

You are performing FEWER operations than the size of the array... you aren't even LOOKING at every entry.

This is ridiculous. How do they do this?

Instead of thinking of the database as an array think of it as a function.

We are trying to invert the following function. Encode all of the elements of the database as a vector in an "n-dimensional state space"; this will require only a logarithmic number of qubits.

Given a function which returns true or false depending on whether an element is the one you want, you can write an algorithm that searches it using superposition EXTREMELY fast, O(sqrt(n)). This is Grover's algorithm.

It works by putting the whole state space into a "uniform superposition", so that all of the qubits are correlated together. Then you start rotating the state vector. Since each rotation is tied to the function that tells you whether an element matches your target, it takes about sqrt(n) rotations to concentrate the amplitude where you want it.
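That rotate-and-check loop can be sketched as a toy state-vector simulation of Grover-style amplitude amplification (sizes and names are mine, and a real quantum computer would not store the whole vector explicitly - that's the point):

```python
import numpy as np

def grover(n_items, target, iterations=None):
    """Toy classical simulation of Grover search over n_items entries.
    Amplifies the target's amplitude in about (pi/4)*sqrt(n) iterations."""
    n = n_items
    if iterations is None:
        iterations = int(round(np.pi / 4 * np.sqrt(n)))
    state = np.full(n, 1 / np.sqrt(n))   # uniform superposition over all items
    for _ in range(iterations):
        state[target] *= -1              # oracle: flip the target's sign
        state = 2 * state.mean() - state # diffusion: invert about the mean
    return state

amps = grover(64, target=13)
print(np.argmax(amps ** 2))  # 13 -- found after only ~6 "rotations"
```

Each oracle-plus-diffusion step rotates the state a little further toward the target, which is why roughly sqrt(n) steps suffice instead of n lookups.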

That's why we want these things. They are just faster. They break the rules, but they are hard to build.

