Monday, 15 December 2014

Microsoft’s Quantum Mechanics
Can an aging corporation’s adventures in fundamental physics research open a new era of unimaginably powerful computers?

In 2012, physicists in the Netherlands announced a discovery in particle physics that started chatter about a Nobel Prize. Inside a tiny rod of semiconductor crystal chilled colder than outer space, they had caught the first glimpse of a strange particle called the Majorana fermion, finally confirming a prediction made in 1937. It was an advance seemingly unrelated to the challenges of selling office productivity software or competing with Amazon in cloud computing, but Craig Mundie, then heading Microsoft’s technology and research strategy, was delighted. The abstruse discovery—partly underwritten by Microsoft—was crucial to a project at the company aimed at making it possible to build immensely powerful computers that crunch data using quantum physics. “It was a pivotal moment,” says Mundie. “This research was guiding us toward a way of realizing one of these systems.”
Microsoft is now almost a decade into that project and has just begun to talk publicly about it. If it succeeds, the world could change dramatically. Since the physicist Richard Feynman first suggested the idea of a quantum computer in 1982, theorists have proved that such a machine could solve problems that would take the fastest conventional computers hundreds of millions of years or longer. Quantum computers might, for example, give researchers better tools to design novel medicines or super-efficient solar cells. They could revolutionize artificial intelligence.
Progress toward that computational nirvana has been slow because no one has been able to make a reliable enough version of the basic building block of a quantum computer: a quantum bit, or qubit, which uses quantum effects to encode data. Academic and government researchers and corporate labs at IBM and Hewlett-Packard have all built them. Small numbers have been wired together, and the resulting devices are improving. But no one can control the physics well enough for these qubits to serve as the basis of a practical general-purpose computer.
Microsoft has yet to even build a qubit. But in the kind of paradox that can be expected in the realm of quantum physics, it may also be closer than anyone else to making quantum computers practical. The company is developing a new kind of qubit, known as a topological qubit, based largely on that 2012 discovery in the Netherlands. There’s good reason to believe this design will be immune from the flakiness plaguing existing qubits. It will be better suited to mass production, too. “What we’re doing is analogous to setting out to make the first transistor,” says Peter Lee, Microsoft’s head of research. His company is also working on how the circuits of a computer made with topological qubits might be designed and controlled. And Microsoft researchers working on algorithms for quantum computers have shown that a machine made up of only hundreds of qubits could run chemistry simulations beyond the capacity of any existing supercomputer.
In the next year or so, physics labs supported by Microsoft will begin testing crucial pieces of its qubit design, following a blueprint developed by an outdoorsy math genius. If those tests work out, a corporation widely thought to be stuck in computing’s past may unlock its future.
Stranger still: a physicist at the fabled but faded Bell Labs might get there first.
Tied Up in Knots
In a sunny room 100 yards from the Pacific Ocean, Michael Freedman, the instigator and technical mastermind of Microsoft’s project, admits to feeling inferior. “When you start thinking about quantum computing, you realize that you yourself are some kind of clunky chemical analog computer,” he says. Freedman, who is 63, is director of Station Q, the Microsoft research group that leads the effort to create a topological qubit, working from a dozen or so offices on the campus of the University of California, Santa Barbara. Fit and tanned, he has dust on his shoes from walking down a beach path to lunch.
If his mind is a clunky chemical computer, it is an extraordinary one. A mathematical prodigy who entered UC Berkeley at the age of 16 and grad school two years later, Freedman was 30 when he solved a version of one of the longest-standing problems in mathematics, the Poincaré conjecture. He worked it out without writing anything down, visualizing the distortion of four-dimensional shapes in his head. “I had seen my way through the argument,” Freedman recalls. When he translated that inner vision into a 95-page proof, it earned the Fields Medal, the highest honor in mathematics.

That cemented Freedman’s standing as a leading light in topology, the discipline concerned with properties of shapes that don’t change when those shapes are distorted. (An old joke has it that topologists can’t distinguish a coffee cup from a doughnut—both are surfaces punctured by a single hole.) But he was drawn into physics in 1988 after a colleague discovered a connection between some of the math describing the topology of knots and a theory explaining certain quantum phenomena. “It was a beautiful thing,” says Freedman. He immediately saw that this connection could allow a machine governed by that same quantum physics to solve problems too hard for conventional computers. Ignorant that the concept of quantum computing already existed, he had independently reinvented it.
Freedman kept working on that idea, and in 1997 he joined Microsoft’s research group on theoretical math. Soon after, he teamed up with a Russian theoretical physicist, Alexei Kitaev, who had proved that a “topological qubit” formed by the same physics could be much more reliable than qubits that other groups were building. Freedman eventually began to feel he was onto something that deserved attention beyond his rarefied world of deep math and physics. In 2004, he showed up at Craig Mundie’s office and announced that he saw a way to build a qubit dependable enough to scale up. “I ended up sort of making a pitch,” says Freedman. “It looked like if you wanted to start to build the technology, you could.”
Mundie bought it. Though Microsoft hadn’t been trying to develop quantum computers, he knew about their remarkable potential and the slow progress that had been made toward building them. “I was immediately fascinated by the idea that maybe there was a completely different approach,” he says. “Such a form of computing would probably turn out to be the basis of a transformation akin to what classical computing has done for the planet in the last 60 years.” He set up an effort to create the topological qubit, with a slightly nervous Freedman at the helm. “Never in my life had I even built a transistor radio,” Freedman says.
Distant Dream
In some ways, a quantum computer wouldn’t be so different from a conventional one. Both deal in bits of data represented in binary form. And both types of machine are made up of basic units that represent bits by flipping between different states like a switch. In a conventional computer, every tiny transistor on a chip can be flipped either off to signify a 0 or on for a 1. But because of the quirky rules of quantum physics, which govern the behavior of matter and energy at extremely tiny scales, qubits can perform tricks that make them exceedingly powerful. A qubit can enter a quantum state known as superposition, which effectively represents 0 and 1 at the same time. Once in a superposition state, qubits can become linked, or “entangled,” in a way that means any operation affecting one instantly changes the fate of another. Because of superposition and entanglement, a single operation in a quantum computer can execute parts of a calculation that would take many, many more operations for an equivalent number of ordinary bits. A quantum computer can essentially explore a huge number of possible computational pathways in parallel. For some types of problems, a quantum computer’s advantage over a conventional one grows exponentially with the amount of data to be crunched. “Their power is still an amazement to me,” says Raymond Laflamme, executive director of the Institute for Quantum Computing at the University of Waterloo, in Ontario. “They change the foundation of computer science and what we mean by what is computable.”
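To make the counting concrete: the state of n qubits can be described by 2^n complex amplitudes, so the cost of simulating them on a conventional machine doubles with every qubit added. The toy sketch below (illustrative only; it has nothing to do with Microsoft's topological design) shows superposition and entanglement in that amplitude picture, using plain Python:

```python
import math

# One qubit is a pair of amplitudes: |0> = [1, 0], |1> = [0, 1].
zero = [1.0, 0.0]

# A Hadamard gate puts a qubit into an equal superposition of 0 and 1.
s = 1 / math.sqrt(2)
superposed = [s * (zero[0] + zero[1]), s * (zero[0] - zero[1])]

# Two qubits: the joint state is the tensor product, four amplitudes
# ordered |00>, |01>, |10>, |11>.
pair = [a * b for a in superposed for b in zero]

# A CNOT gate flips the second qubit whenever the first is 1 (it swaps
# the |10> and |11> amplitudes). Applied here, it entangles the pair
# into (|00> + |11>)/sqrt(2): measuring either qubit instantly fixes
# the other.
bell = [pair[0], pair[1], pair[3], pair[2]]

print([round(a, 3) for a in bell])   # [0.707, 0.0, 0.0, 0.707]

# The classical bookkeeping doubles with each qubit: n qubits need
# 2**n amplitudes, which is why a few hundred reliable qubits would
# overwhelm any conventional supercomputer.
print(2 ** 300)
```

The point of the sketch is the last line: a 300-qubit state has more amplitudes than there are atoms in the observable universe, which is the source of the exponential advantage described above.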
But pure quantum states are very fragile and can be observed and controlled only in carefully contrived circumstances. For a superposition to be stable, the qubit must be shielded from seemingly trivial noise such as random bumping from subatomic particles or faint electrical fields from nearby electronics. The two best current qubit technologies represent bits in the magnetic properties of individual charged atoms trapped in magnetic fields or as the tiny current inside circuits of superconducting metal. They can preserve superpositions for no longer than fractions of a second before they collapse in a process known as decoherence. The largest number of qubits that have been operated together is just seven.
Since 2009, Google has been testing a machine marketed by the startup D-Wave Systems as the world’s first commercial quantum computer, and in 2013 it bought a version of the machine that has 512 qubits. But those qubits are hard-wired into a circuit for a particular algorithm, limiting the range of problems they can work on. If successful, this approach would create the quantum-computing equivalent of a pair of pliers—a useful tool suited to only some tasks. The conventional approach being pursued by Microsoft offers a fully programmable computer—the equivalent of a full toolbox. And besides, independent researchers have been unable to confirm that D-Wave’s machine truly functions as a quantum computer. Google recently started its own hardware lab to try to create a version of the technology that delivers.
The search for ways to fight decoherence and the errors it introduces into calculations has come to dominate the field of quantum computing. For a qubit to truly be scalable, it would probably need to accidentally decohere only around once in a million operations, says Chris Monroe, a professor at the University of Maryland and co-leader of a quantum computing project funded by the Department of Defense and the Intelligence Advanced Research Projects Activity. Today the best qubits typically decohere thousands of times that often.
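The arithmetic behind Monroe's target is easy to sketch. If each operation decoheres with some probability p, a computation of N operations finishes without a single error only with probability roughly (1 − p)^N; the numbers below are illustrative, not figures from any lab:

```python
# Probability that an n_ops-step computation runs error-free when each
# operation independently decoheres with probability p.
def survival(p, n_ops):
    return (1 - p) ** n_ops

# At one error per thousand operations, a million-step program has
# essentially no chance of finishing cleanly.
print(survival(1e-3, 1_000_000))   # effectively zero

# At one error per million operations, it succeeds about a third of
# the time, a level at which quantum error-correcting codes can
# plausibly mop up the rest.
print(survival(1e-6, 1_000_000))   # roughly 0.37
```

This is why a qubit that decoheres a thousand times too often is not merely a thousand times worse: the failure compounds exponentially with the length of the computation.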
Microsoft’s Station Q might have a better approach. The quantum states that lured Freedman into physics—which occur when electrons are trapped in a plane inside certain materials—should provide the stability that a qubit builder craves, because they are naturally deaf to much of the noise that destabilizes conventional qubits. Inside these materials, electrons take on strange properties at temperatures close to absolute zero, forming what are known as electron liquids. The collective quantum properties of the electron liquids can be used to signify a bit. The elegance of the design, along with grants of cash, equipment, and computing time, has lured some of the world’s leading physics researchers to collaborate with Microsoft. (The company won’t say what fraction of its $11 billion annual R&D spending goes to the project.)
The catch is that the physics remains unproven. To use the quantum properties of electron liquids as bits, researchers would have to manipulate certain particles inside them, known as non-Abelian anyons, so that they loop around one another. And while physicists expect that non-Abelian anyons exist, none have been conclusively detected.
Majorana particles, the kind of non-Abelian anyons that Station Q and its collaborators seek, are particularly elusive. First predicted by the reclusive Italian physicist Ettore Majorana in 1937, not long before he mysteriously disappeared, they have captivated physicists for decades because they have the unique property of being their own antiparticles, so if two ever meet, they annihilate each other in a flash of energy.
No one had reported credible evidence that they existed until 2012, when Leo Kouwenhoven at Delft University of Technology in the Netherlands, who had gotten funding and guidance from Microsoft, announced that he had found them inside nanowires made from the semiconductor indium antimonide. He had coaxed the right kind of electron liquid into existence by connecting the nanowire to a chunk of superconducting electrode at one end and an ordinary one at the other. It offered the strongest support yet for Microsoft’s design. “The finding has given us tremendous confidence that we’re really onto something,” says Microsoft’s Lee. Kouwenhoven’s group and other labs are now trying to refine the results of the experiment and show that the particles can be manipulated. To speed progress and set the stage for possible mass production, Microsoft has begun working with industrial companies to secure supplies of semiconductor nanowires and the superconducting electronics that would be needed to control a topological qubit.
For all that, Microsoft doesn’t yet have its qubit. A way must be found to move Majorana particles around one another in the operation needed to write the equivalent of 0s and 1s. Materials scientists at the Niels Bohr Institute in Copenhagen recently found a way to build nanowires with side branches, which could allow one particle to duck to the side while another passes. Charlie Marcus, a researcher there who has worked with Microsoft since its first design, is now preparing to build a working system with the new wires. “I would say that is going to keep us busy for the next year,” he says.
Success would validate Microsoft’s qubit design and put an end to recent suggestions that Kouwenhoven may not have detected the Majorana particle in 2012 after all. But John Preskill, a professor of theoretical physics at Caltech, says the topological qubit remains nothing more than a nice theory. “I’m very fond of the idea, but after some years of serious effort there’s still no firm evidence,” he says.
Competitive Physics
At Bell Labs in New Jersey, Bob Willett says he has seen the evidence. He peers over his glasses at a dull black crystal rectangle the size of a fingertip. It has hand-soldered wires around its edges and fine zigzags of aluminum on its surface. And in the middle of the chip, in an area less than a micrometer across, Willett reports detecting non-Abelian anyons. If he is right, Willett is farther along than anyone who is working with Microsoft. And in his series of small, careworn labs, he is now preparing to build what—if it works—will be the world’s first topological qubit. “We’re making the transition from the science to the technology now,” he says. His effort has historical echoes. Down the corridor from his labs is a glass display case with the first transistor inside, made on this site in 1947.
Willett’s device is a version of a design that Microsoft has mostly given up on. By the time the company’s project began, Freedman and his collaborators had determined that it should be possible to build a topological qubit using crystals of ultrapure gallium arsenide that trap electrons. But in four years of experiments, the physics labs supported by Microsoft didn’t find conclusive evidence of non-Abelian anyons. Willett had worked on similar physics for years, and after reading a paper of Freedman’s on the design, he decided to have a go himself. In a series of papers published between 2009 and 2013, he reported finding those crucial particles in his own crystal-based devices. When one crystal is cooled with liquid helium to less than 1 Kelvin (−272.15 °C) and subjected to a magnetic field, an electron liquid forms at its center. Willett uses electrodes to stream the particles around its edge; if they are non-Abelian anyons looping around their counterparts in the center, they should change the topological state of the electron liquid as a whole. He has published results from several different experiments in which he saw telltale wobbles, which theorists had predicted, in the current of those flowing particles. He’s now moved on to building a qubit design. It is not much more complex than his first experiment: just two of the same circuits placed back to back on the same crystal, with extra electrodes that link electron liquids and can encode and read out quantum states that represent the equivalent of 0s and 1s.
Willett hopes that device will squelch skepticism about his results, which no one else has been able to replicate. Microsoft’s collaborator Charlie Marcus says Willett “saw signals that we didn’t see.” Willett counters that Marcus and others have made their devices too large and used crystals with important differences in their properties. He says he recently confirmed that by testing some devices made to the specifications used by other researchers. “Having worked with the materials they’re working with, I can see why they stopped doing it, because it is a pain in the ass,” he says.
Bell Labs, now owned by the French telecommunications company Alcatel-Lucent, is smaller and poorer than it was back when AT&T, unchallenged as the American telephone monopoly, let many researchers do pretty much anything they desired. Some of Willett’s rooms overlook the dusty, scarred ground left when an entire wing of the lab was demolished this year. But with fewer people around than the labs had long ago, it’s easier to get access to the equipment he needs, he says. And Alcatel has begun to invest more in his project. Willett used to work with just three other physicists, but recently he began collaborating with mathematicians and optics experts too. Bell Labs management has been asking about the kinds of problems that might be solved with a small number of qubits. “It’s expanding into a relatively big effort,” he says.
Willett sees himself as an academic colleague of the Microsoft researchers rather than a corporate competitor, and he still gets invited to Freedman’s twice-yearly symposiums that bring Microsoft collaborators and other leading physicists to Santa Barbara. But Microsoft management has been more evident at recent meetings, Willett says, and he has sometimes felt that his being from another corporation made things awkward.
It would be more than just awkward if Willett beat Microsoft to proving that the idea it has championed can work. For Microsoft to open up a practical route to quantum computing would be surprising. For the withered Bell Labs, owned by a company not even in the computing business, it would be astounding.
Quantum Code
On Microsoft’s leafy campus in Redmond, Washington, thousands of software engineers toil to fix bugs and add features to Windows and Microsoft Office. Tourists pose in the company museum for photos with a life-size cutout of a 1978 Bill Gates and his first employees. In the main research building, Krysta Svore leads a dozen people working on software for computers that may never exist. The team is figuring out what the first generation of quantum computers could do for us.
The group was established because although quantum computers would be powerful, they cannot solve every problem. And only a handful of quantum algorithms have been developed in enough detail to suggest that they could be practical on real hardware. “Quantum computing is possibly very disruptive, but we need to understand where the power is,” Svore says.
No quantum computer is ever going to fit into your pocket, because of the way qubits need to be supercooled (unless, of course, someone uses a quantum computer to design a better qubit). Rather, they would be used like data centers or supercomputers to power services over the Internet, or to solve problems that allow other technologies to be improved. One promising idea is to use quantum computers for superpowered chemistry simulations that could accelerate progress on major problems in areas such as health or energy. A quantum computer could simulate reality so precisely that it could replace years of plodding lab work, says Svore. Today roughly a third of U.S. supercomputer time is dedicated to simulations for chemistry or materials science, according to the Department of Energy. Svore’s group has developed an algorithm that would let even a first-generation quantum computer tackle much more complex problems, such as virtually testing a catalyst for removing carbon dioxide from the atmosphere, in just hours or minutes. “It’s a potential killer application of quantum computers,” she says.
But it’s possible to envision countless other killer applications. Svore’s group has produced some of the first evidence that quantum computers can be used for machine learning, a technology increasingly central to Microsoft and its rivals. Recent advances in image and speech recognition have triggered a frenzy of new research in artificial intelligence. But they rely on clusters of thousands of computers working together, and the results still lag far behind human capabilities. Quantum computers might overcome the technology’s limitations.
Work like that helps explain how the first company to build a quantum computer might gain an advantage virtually unprecedented in the history of technology. “We believe that there’s a chance to do something that could be the foundation of a whole new economy,” says Microsoft’s Peter Lee. As you would expect, he and all the others working on quantum hardware say they are optimistic. But with so much still to do, the prize feels as distant as ever. It’s as if qubit technology is in a superposition between changing the world and decohering into nothing more than a series of obscure research papers. That’s the kind of imponderable that people working on quantum technology have to handle every day. But with a payoff so big, who can blame them for taking a whack at it?

“How Do People Get New Ideas?”

ON CREATIVITY
How do people get new ideas?
Presumably, the process of creativity, whatever it is, is essentially the same in all its branches and varieties, so that the evolution of a new art form, a new gadget, a new scientific principle, all involve common factors. We are most interested in the “creation” of a new scientific principle or a new application of an old one, but we can be general here.
One way of investigating the problem is to consider the great ideas of the past and see just how they were generated. Unfortunately, the method of generation is never clear even to the “generators” themselves.
But what if the same earth-shaking idea occurred to two men, simultaneously and independently? Perhaps, the common factors involved would be illuminating. Consider the theory of evolution by natural selection, independently created by Charles Darwin and Alfred Wallace.
There is a great deal in common there. Both traveled to far places, observing strange species of plants and animals and the manner in which they varied from place to place. Both were keenly interested in finding an explanation for this, and both failed until each happened to read Malthus’s “Essay on Population.”
Both then saw how the notion of overpopulation and weeding out (which Malthus had applied to human beings) would fit into the doctrine of evolution by natural selection (if applied to species generally).
Obviously, then, what is needed is not only people with a good background in a particular field, but also people capable of making a connection between item 1 and item 2 which might not ordinarily seem connected.
Undoubtedly in the first half of the 19th century, a great many naturalists had studied the manner in which species were differentiated among themselves. A great many people had read Malthus. Perhaps some both studied species and read Malthus. But what you needed was someone who studied species, read Malthus, and had the ability to make a cross-connection.
That is the crucial point: that is the rare characteristic that must be found. Once the cross-connection is made, it becomes obvious. Thomas H. Huxley is supposed to have exclaimed after reading On the Origin of Species, “How stupid of me not to have thought of this.”
But why didn’t he think of it? The history of human thought would make it seem that there is difficulty in thinking of an idea even when all the facts are on the table. Making the cross-connection requires a certain daring. It must, for any cross-connection that does not require daring is performed at once by many and develops not as a “new idea,” but as a mere “corollary of an old idea.”
It is only afterward that a new idea seems reasonable. To begin with, it usually seems unreasonable. It seems the height of unreason to suppose the earth was round instead of flat, or that it moved instead of the sun, or that objects required a force to stop them when in motion, instead of a force to keep them moving, and so on.
A person willing to fly in the face of reason, authority, and common sense must be a person of considerable self-assurance. Since he occurs only rarely, he must seem eccentric (in at least that respect) to the rest of us. A person eccentric in one respect is often eccentric in others.
Consequently, the person who is most likely to get new ideas is a person of good background in the field of interest and one who is unconventional in his habits. (To be a crackpot is not, however, enough in itself.)
Once you have the people you want, the next question is: Do you want to bring them together so that they may discuss the problem mutually, or should you inform each of the problem and allow them to work in isolation?
My feeling is that as far as creativity is concerned, isolation is required. The creative person is, in any case, continually working at it. His mind is shuffling his information at all times, even when he is not conscious of it. (The famous example of Kekulé working out the structure of benzene in his sleep is well known.)
The presence of others can only inhibit this process, since creation is embarrassing. For every new good idea you have, there are a hundred, ten thousand foolish ones, which you naturally do not care to display.
Nevertheless, a meeting of such people may be desirable for reasons other than the act of creation itself.
No two people exactly duplicate each other’s mental stores of items. One person may know A and not B, another may know B and not A, and either knowing A and B, both may get the idea—though not necessarily at once or even soon.
Furthermore, the information may not only be of individual items A and B, but even of combinations such as A-B, which in themselves are not significant. However, if one person mentions the unusual combination of A-B and another the unusual combination A-C, it may well be that the combination A-B-C, which neither has thought of separately, may yield an answer.
It seems to me then that the purpose of cerebration sessions is not to think up new ideas but to educate the participants in facts and fact-combinations, in theories and vagrant thoughts.
But how to persuade creative people to do so? First and foremost, there must be ease, relaxation, and a general sense of permissiveness. The world in general disapproves of creativity, and to be creative in public is particularly bad. Even to speculate in public is rather worrisome. The individuals must, therefore, have the feeling that the others won’t object.
If a single individual present is unsympathetic to the foolishness that would be bound to go on at such a session, the others would freeze. The unsympathetic individual may be a gold mine of information, but the harm he does will more than outweigh that. It seems necessary to me, then, that all people at a session be willing to sound foolish and listen to others sound foolish.
If a single individual present has a much greater reputation than the others, or is more articulate, or has a distinctly more commanding personality, he may well take over the conference and reduce the rest to little more than passive obedience. The individual may himself be extremely useful, but he might as well be put to work solo, for he is neutralizing the rest.
The optimum number of the group would probably not be very high. I should guess that no more than five would be wanted. A larger group might have a larger total supply of information, but there would be the tension of waiting to speak, which can be very frustrating. It would probably be better to have a number of sessions at which the people attending would vary, rather than one session including them all. (This would involve a certain repetition, but even repetition is not in itself undesirable. It is not what people say at these conferences, but what they inspire in each other later on.)
For best purposes, there should be a feeling of informality. Joviality, the use of first names, joking, relaxed kidding are, I think, of the essence—not in themselves, but because they encourage a willingness to be involved in the folly of creativeness. For this purpose I think a meeting in someone’s home or over a dinner table at some restaurant is perhaps more useful than one in a conference room.
Probably more inhibiting than anything else is a feeling of responsibility. The great ideas of the ages have come from people who weren’t paid to have great ideas, but were paid to be teachers or patent clerks or petty officials, or were not paid at all. The great ideas came as side issues.
To feel guilty because one has not earned one’s salary because one has not had a great idea is the surest way, it seems to me, of making it certain that no great idea will come in the next time either.
Yet your company is conducting this cerebration program on government money. To think of congressmen or the general public hearing about scientists fooling around, boondoggling, telling dirty jokes, perhaps, at government expense, is to break into a cold sweat. In fact, the average scientist has enough public conscience not to want to feel he is doing this even if no one finds out.
I would suggest that members at a cerebration session be given sinecure tasks to do—short reports to write, or summaries of their conclusions, or brief answers to suggested problems—and be paid for that, the payment being the fee that would ordinarily be paid for the cerebration session. The cerebration session would then be officially unpaid-for and that, too, would allow considerable relaxation.
I do not think that cerebration sessions can be left unguided. There must be someone in charge who plays a role equivalent to that of a psychoanalyst. A psychoanalyst, as I understand it, by asking the right questions (and except for that interfering as little as possible), gets the patient himself to discuss his past life in such a way as to elicit new understanding of it in his own eyes.
In the same way, a session-arbiter will have to sit there, stirring up the animals, asking the shrewd question, making the necessary comment, bringing them gently back to the point. Since the arbiter will not know which question is shrewd, which comment necessary, and what the point is, his will not be an easy job.
As for “gadgets” designed to elicit creativity, I think these should arise out of the bull sessions themselves. If thoroughly relaxed, free of responsibility, discussing something of interest, and being by nature unconventional, the participants themselves will create devices to stimulate discussion.

Who Owns the Biggest Biotech Discovery of the Century?

There’s a bitter fight over the patents for CRISPR, a breakthrough new form of DNA editing.

Last month in Silicon Valley, biologists Jennifer Doudna and Emmanuelle Charpentier showed up in black gowns to receive the $3 million Breakthrough Prize, a glitzy award put on by Internet billionaires including Mark Zuckerberg. They’d won for developing CRISPR-Cas9, a “powerful and general technology” for editing genomes that’s been hailed as a biotechnology breakthrough.
How did the high-profile prize for CRISPR and the patent on it end up in different hands? The patent belongs not to the prizewinners but to Feng Zhang, a scientist at the Broad Institute of MIT and Harvard. That split is now at the center of a seething debate over who invented what, and when, one that involves three heavily financed startup companies, a half-dozen universities, and thousands of pages of legal documents.
“The intellectual property in this space is pretty complex, to put it nicely,” says Rodger Novak, a former pharmaceutical industry executive who is now CEO of CRISPR Therapeutics, a startup in Basel, Switzerland, that was cofounded by Charpentier. “Everyone knows there are conflicting claims.”
At stake are rights to an invention that may be the most important new genetic engineering technique since the beginning of the biotechnology age in the 1970s. The CRISPR system, dubbed a “search and replace function” for DNA, lets scientists easily disable genes or change their function by replacing DNA letters. During the last few months, scientists have shown that it’s possible to use CRISPR to rid mice of muscular dystrophy, cure them of a rare liver disease, make human cells immune to HIV, and genetically modify monkeys (see “Genome Surgery” and “10 Breakthrough Technologies 2014: Genome Editing”).
No CRISPR drug yet exists. But if CRISPR turns out to be as important as scientists hope, commercial control over the underlying technology could be worth billions.
The control of the patents is crucial to several startups that together quickly raised more than $80 million to turn CRISPR into cures for devastating diseases. They include Editas Medicine and Intellia Therapeutics, both of Cambridge, Massachusetts. Companies expect that clinical trials could begin in as little as three years.
Zhang cofounded Editas Medicine, and this week the startup announced that it had licensed his patent from the Broad Institute. But Editas doesn’t have CRISPR sewn up. That’s because Doudna, a structural biologist at the University of California, Berkeley, was a cofounder of Editas, too. And since Zhang’s patent came out, she’s broken off with the company, and her intellectual property—in the form of her own pending patent—has been licensed to Intellia, a competing startup unveiled only last month. Making matters still more complicated, Charpentier sold her own rights in the same patent application to CRISPR Therapeutics.
In an e-mail, Doudna said she no longer has any involvement with Editas. “I am not part of the company’s team at this point,” she said. Doudna declined to answer further questions, citing the patent dispute.
Few researchers are now willing to discuss the patent fight. Lawsuits are certain and they worry anything they say will be used against them. “The technology has brought a lot of excitement, and there is a lot of pressure, too. What are we going to do? What kind of company do we want?” Charpentier says. “It all sounds very confusing for an outsider, and it’s also quite confusing as an insider.”
Academic labs aren’t waiting for the patent claims to get sorted out. Instead, they are racing to assemble very large engineering teams to perfect and improve the genome-editing technique. On the Boston campus of Harvard’s medical school, for instance, George Church, a specialist in genomics technology, says he now has 30 people in his lab working on it.
Because of all the new research, Zhang says, the importance of any patent, including his own, isn’t entirely clear. “It’s one important piece, but I don’t really pay attention to patents,” he says. “What the final form of this technology is that changes people’s lives may be very different.”
The new gene-editing system was unearthed in bacteria—organisms that use it as a way to identify, and then carve up, the DNA of invading viruses. That work stretched across a decade. Then, in June 2012, a small team led by Doudna and Charpentier published a key paper showing how to turn that natural machinery into a “programmable” editing tool, to cut any DNA strand, at least in a test tube.
The next step was clear—scientists needed to see if the editing magic could work on the genomes of human cells, too. In January 2013, the laboratories of Harvard’s Church and Broad’s Zhang were first to publish papers showing that the answer was yes. Doudna published her own results a few weeks later.
Everyone by then realized that CRISPR might become an immensely flexible way to rewrite DNA, and possibly to treat rare metabolic problems and genetic diseases as diverse as hemophilia and the neurodegenerative disease Huntington’s.
Venture capital groups quickly began trying to recruit the key scientists behind CRISPR, tie up the patents, and form startups. Charpentier threw in with CRISPR Therapeutics in Europe. Doudna had already started a small company, Caribou Biosciences, but in 2013 she joined Zhang and Church as a cofounder of Editas. With $43 million from leading venture funds Third Rock Ventures (see “50 Smartest Companies: Third Rock Ventures”), Polaris Partners, and Flagship Ventures, Editas looked like the dream team of gene-editing startups.
In April of this year, Zhang and the Broad won the first of several sweeping patents that cover using CRISPR in eukaryotes—or any species whose cells contain a nucleus (see “Broad Institute Gets Patent on Revolutionary Gene-Editing Method”). That meant that they’d won the rights to use CRISPR in mice, pigs, cattle, humans—in essence, in every creature other than bacteria.
The patent came as a shock to some. That was because Broad had paid extra to get it reviewed very quickly, in less than six months, and few knew it was coming. Along with the patent came more than 1,000 pages of documents. According to Zhang, Doudna’s prediction in her own earlier patent application that her discovery would work in humans was “mere conjecture”; instead, he was the first to show it, in a separate and “surprising” act of invention.
The patent documents have caused consternation. The scientific literature shows that several scientists managed to get CRISPR to work in human cells. In fact, its easy reproducibility in different organisms is the technology’s most exciting hallmark. That would suggest that, in patent terms, it was “obvious” that CRISPR would work in human cells, and that Zhang’s invention might not be worthy of its own patent.
What’s more, there’s scientific credit at stake. In order to show he was “first to invent” the use of CRISPR-Cas in human cells, Zhang supplied snapshots of lab notebooks that he says show he had the system up and running in early 2012, even before Doudna and Charpentier published their results or filed their own patent application. That timeline would mean he hit on the CRISPR-Cas editing system independently. In an interview, Zhang affirmed he’d made the discoveries on his own. Asked what he’d learned from Doudna and Charpentier’s paper, he said “not much.”
Not everyone is convinced. “All I can say is that we did it in my lab with Jennifer Doudna,” says Charpentier, now a professor at the Helmholtz Centre for Infection Research and Hannover Medical School in Germany. “Everything here is very exaggerated because this is one of those unique cases of a technology that people can really pick up easily, and it’s changing researchers’ lives. Things are happening fast, maybe a bit too fast.”
This isn’t the end of the patent fight. Although Broad moved very swiftly, lawyers for Doudna and Charpentier are expected to mount an interference proceeding in the U.S.—that is, a winner-takes-all legal process in which one inventor can take over another’s patent. Who wins will depend on which scientist can produce lab notebooks, e-mails, or documents with the earliest dates.
“I am very confident that the future will clarify the situation,” says Charpentier. “And I would like to believe the story is going to end up well.”

RoboBrain: The World's First Knowledge Engine For Robots

If you have a question, you can ask Google or Bing or any number of online databases. Now robots have their own knowledge database.

One of the most exciting changes influencing modern life is the ability to search and interact with information on a scale that has never been possible before. All this is thanks to a convergence of technologies that have resulted in services such as Google Now, Siri, Wikipedia and IBM’s Watson supercomputer.
This gives us answers to a wide range of questions on almost any topic simply by whispering a few words into a smart phone or typing a few characters into a laptop. Part of what makes this possible is that humans are good at coping with ambiguity. So the answer to a simple question such as “how to make cheese on toast” can result in very general instructions that an ordinary person can easily follow.
For robots, the challenge is quite different. These machines require detailed instructions even for the simplest task. For example, a robot asking a search engine “how to bring sweet tea from the kitchen” is unlikely to get the detail it needs to carry out the task since it requires all kinds of incidental knowledge such as the idea that cups can hold liquid (but not when held upside down), that water comes from taps and can be heated in a kettle or microwave, and so on.
The truth is that if robots are ever to get useful knowledge from search engines, these databases will have to contain a much more detailed description of every task that they might need to carry out.
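To get a sense of what “a much more detailed description of every task” means in practice, here is a minimal, hypothetical sketch of a machine-readable task description. The schema (facts, steps, preconditions) and all the names in it are illustrative assumptions, not any actual database format:

```python
# A hypothetical, machine-readable description of the "bring sweet tea"
# task. A robot needs the incidental background facts spelled out
# explicitly -- facts a human would never think to state.

make_tea = {
    "task": "bring sweet tea from the kitchen",
    "facts": [
        ("cup", "can_hold", "liquid"),
        ("cup_upside_down", "cannot_hold", "liquid"),
        ("water", "comes_from", "tap"),
        ("water", "heated_by", "kettle"),
    ],
    "steps": [
        {"action": "grasp", "object": "cup", "precondition": "cup_upright"},
        {"action": "fill", "object": "cup", "with": "hot_water"},
        {"action": "add", "object": "sugar", "to": "cup"},
        {"action": "carry", "object": "cup", "to": "requester"},
    ],
}

def incidental_knowledge(task, subject):
    """Return the background facts about one object that a human takes
    for granted but a robot must be told."""
    return [f for f in task["facts"] if f[0] == subject]

print(incidental_knowledge(make_tea, "water"))
# → [('water', 'comes_from', 'tap'), ('water', 'heated_by', 'kettle')]
```

The point of the sketch is the asymmetry it makes visible: the four “steps” are the easy part, while the “facts” list, which a search engine answer for humans simply omits, is what a robot-facing database would have to supply for every task.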
Enter Ashutosh Saxena at Stanford University in Palo Alto and a number of pals, who have set themselves the task of building such a knowledge engine for robots.
These guys have already begun creating a kind of Google for robots that can be freely accessed by any device wishing to carry out a task. At the same time, the database gathers new information about these tasks as robots perform them, thereby learning as it goes. They call their new knowledge engine RoboBrain.
The team have taken on a number of challenges in designing RoboBrain. For a start, robots have many different types of sensors and designs so the information has to be stored in a way that is useful for any kind of machine. The knowledge engine should be able to respond to a variety of different types of questions posed by robots in different ways. And it should be able to gather knowledge from different sources, such as the World Wide Web and by crawling existing knowledge bases such as WordNet, ImageNet, Freebase and OpenCyc.
What’s more, Saxena and co want RoboBrain to be a collaborative effort that links up with existing services. To that end, the team has already partnered with services such as Tell Me Dave, a start-up aiming to allow robots to understand natural language instructions, and PlanIt, a way for robots to plan paths using crowdsourced information.
“As more and more researchers contribute knowledge to RoboBrain, it will not only make their robots perform better but we also believe this will be beneficial for the robotics community at large,” say Saxena and co. They have set up a website called RoboBrain.me to act as a gateway and to promote the idea.
Setting up a knowledge engine of this kind is no easy task. Saxena and co have approached it as a problem of network theory in which the knowledge is represented as a directed graph. The nodes in this graph can be a variety of different things such as an image, text, video, haptic data or a learned concept, such as a “container”.
RoboBrain then accepts new information in the form of a set of edges that link a subset of nodes together. For example, the idea that a “sitting human can use a mug” might link the nodes for mug, cup and sitting human with concepts such as “being able to use”.
Any robot that queries the database for this term, or something like it, can then download the set of edges and nodes that represent it.
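The directed-graph scheme described above can be sketched in a few lines of code. This is a toy illustration under stated assumptions: the class, the relation names, and the example facts are all invented for clarity and do not reflect RoboBrain’s real data or API:

```python
from collections import defaultdict

# A toy directed knowledge graph: nodes are concepts, and each edge is a
# labeled relation linking one node to another, as described in the text.

class KnowledgeGraph:
    def __init__(self):
        # node -> list of (relation, target node) outgoing edges
        self.edges = defaultdict(list)

    def add(self, subject, relation, obj):
        """Insert one fact as a labeled edge, e.g. ('mug', 'is_a', 'container')."""
        self.edges[subject].append((relation, obj))

    def query(self, node):
        """Return the outgoing edges for a node -- the small subgraph a
        robot would download when it asks about a concept."""
        return self.edges.get(node, [])

kg = KnowledgeGraph()
kg.add("mug", "is_a", "container")
kg.add("sitting_human", "can_use", "mug")
kg.add("egg_carton", "has_property", "fragile")
kg.add("light_bulb", "has_property", "fragile")
kg.add("fragile", "implies", "handle_gently")

print(kg.query("mug"))  # → [('is_a', 'container')]
```

The “fragile” node also hints at how knowledge learned in one situation transfers to another: a robot that follows light_bulb → fragile → handle_gently reuses the same handling rule it learned for egg cartons.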
This is more than just a neat idea. Saxena and co have already begun to build the database and use it to allow robots to plan certain actions, like navigating indoors or moving cooking ingredients around.
They show how one of their own robots uses RoboBrain to move an egg carton to the other end of a table. Since eggs are fragile, they have to be handled carefully, something that the robot can learn by querying RoboBrain.
An important part of the project is to apply knowledge learned in one situation to other situations. So the same technique for handling eggs could also be used for handling other fragile objects, such as light bulbs.
The team have big plans for the future. For instance, they would like to expand the knowledge base to include even larger knowledge sources, such as online videos. A robot that could query online “how-to” videos could then learn how to perform a wide variety of household tasks.
That’s interesting work with important potential to change the way that robots learn on a grand scale. Online knowledge bases have had a remarkable impact on the way humans think about the world around them and how they interact with it.
It is certainly not beyond belief that RoboBrain might have a similar impact for our electronic cousins.

People Want Safe Communications, Not Usable Cryptography

For encryption to be widely used, it must be built into attractive, easy-to-use apps like those people already rely on.

Security and privacy expert Micah Lee recently described how he helped set up cryptographically protected communications between whistleblower Edward Snowden and the journalists Glenn Greenwald and Laura Poitras, who would share what he had learned about the NSA’s surveillance programs with the world. Lee’s tale of how the three struggled to master the technology was an urgent reminder of a problem that has bugged me for a while and has implications for anyone who wants to ensure the privacy of personal or professional matters.
The cryptographic software we have today hobbles those who try to use it with Rube Goldberg-machine complexity and academic language as dated as a pair of Jordache jeans. Snowden, Poitras, and Greenwald’s tussles with that problem could conceivably have foiled Snowden’s attempts to communicate safely, leaving the world in the dark about U.S. surveillance practices and their effects on our security and privacy.
Why is encryption software so horrid to use? Because there’s no such thing as usable cryptography, despite growth in popularity of the buzzword “usable crypto” among experts in recent years. Usability and crypto are in fact two separate disciplines. One is about crafting things that people interact with; the other is concerned with technical plumbing that, although crucial, should not be visible to the end user. Unless we find the right balance, consumers will never benefit from crypto.
The cypherpunk dream—where crypto is ubiquitous and everyone speaks code as a second language—never reached fruition because we cryptographers mistook our goal for our consumers’ goal. Johnny can’t encrypt because Johnny never wanted to encrypt. Nobody really wants cryptography in and of itself. What they want is to communicate how, and with whom, they please, but safely.
Cryptographers and the security and privacy community can’t fix this problem by ourselves. Real-world cryptography isn’t only about cryptography. It’s just as much about product design, and building experiences that work for the user—not requiring work from the user. It’s a cross-discipline problem that requires not only cryptographers but user-experience designers and developers, too.
Equivalent problems have been more or less solved in other areas of computing. The e-mail encryption system PGP debuted in 1991, the same year as Linux and the World Wide Web. The last two have evolved to become central to many services and products with hundreds of millions of nonexpert users. But when you try to use PGP or its open-source cousin, GPG, you will find yourself in many ways stuck in 1991—as Snowden and his contacts discovered.
One way we can start to solve this problem is by adapting a common tool in security circles, the security audit, where an application’s vulnerability to attacks is investigated through a variety of technical processes. Recently, campaigners have raised money to fund security audits of critical tools such as the hard-drive encryption software TrueCrypt. I suggest we use the same model to fund user-experience audits of secure communication software, and subject our tools to the kind of user testing that hones the blockbuster apps of leading consumer companies.
We also need to change how we talk to users about cryptographic concepts and security, and to set up places for cross-discipline research into how to craft friendly user experiences underpinned by security and privacy technologies.
Right now, things are bad, but inconsistently promising. The Open Whisper Systems project has made mobile apps for encrypted messaging and calls that appear much like “normal” apps for voice and text, and recently it announced it is helping WhatsApp encrypt its users’ messages. We have new organizations like Simply Secure, which aims to foster the development of usable security and privacy software (and is led by a product designer, not a cryptographer).
However, there aren’t many of these exceptional products or organizations. We’re still way too new at this for our own good—or that of the many people who need ways to stay safe. And our attempts aren’t always successful. The sooner we find ways to deliver good user experience and security together, the more impact the tools we make can have. Because let’s face it, “the masses” aren’t going to sacrifice a good experience for a bad one that includes encryption.