Trying to make a reliable computer out of unreliable parts
Murray Gell-Mann, Scientist

Keith and I worked on a problem of what to do to make a computer that was reliable out of extremely unreliable elements. It was a very interesting problem, although the practical motivation disappeared after a while because at that time, we were thinking in terms of very unreliable vacuum tubes and, of course, they were succeeded by extremely reliable transistors. We didn't know that. But at that time, it seemed to be interesting to look at how to improve the performance of a computer made of very unreliable parts. And, of course, it was a very general problem, and so what we did was to purify the outputs by having a majority voting unit so that we would do each problem three times and have a majority vote. And we'd do that over and over and over again, and the assumption was that... we... we made the assumption of going to the extreme, so that the probability of a correct functioning of the individual unit was 50% plus epsilon, where epsilon is very small, and the probability of its doing it wrong was 50% minus epsilon. Then we did an expansion in epsilon and... the... then we connected the various majority voting units in a sort of random pattern. We were trying to prove that doing that we would get an exponential correction in the... exponentially improving correction, as we put in more and more and more of these units. So that we could take individual elements that were very unreliable and make a reliable computer out of them.
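The scheme described here (run each computation three times, take a majority vote of the outputs, and repeat) can be checked numerically. One voting stage sends p = 1/2 + epsilon to p^3 + 3p^2(1 - p) = 1/2 + (3/2)epsilon - 2*epsilon^3, so to leading order epsilon grows by a factor of 3/2 per stage, which is the exponential improvement being sought. A minimal sketch, assuming independent errors and a perfectly reliable voter (the original problem granted neither, which is why a rigorous proof was hard); the names are illustrative, not from the laboratory's work:

```python
# Iterated 3-way majority voting: a simplified model of the scheme
# described above. Assumes unit failures are independent and the voter
# itself never fails; the real problem allowed unreliable voters too.

def majority_correct(p):
    """Probability that a majority vote over three independent units,
    each correct with probability p, gives the correct answer."""
    return p**3 + 3 * p**2 * (1 - p)

def iterate(p, stages):
    """Feed the output of one voting stage into the next, `stages` times."""
    for _ in range(stages):
        p = majority_correct(p)
    return p

# Units barely better than a coin flip: p = 1/2 + epsilon with epsilon = 0.01.
for stage in [0, 1, 2, 5, 10, 20]:
    print(stage, iterate(0.51, stage))
```

Under these assumptions epsilon grows geometrically (0.01, 0.015, 0.0225, ...) until the cubic term saturates it near 1/2, i.e. the overall probability of a correct answer approaches 1.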

Well, John von Neumann came through as our consultant. I'd never met him before although he was at the institute. I'd seen him but I'd never actually interacted with him before, and he was paid by the Control Systems Laboratory to help Keith and me for a day or so. And he was very impressive solving the cubic equation in his head in an expansion in epsilon and so on for the majority voter and all that, but it was mainly fairly mechanical help that he gave. He endorsed the idea of doing an exponential, but he didn't really supply a proof that it would work. We were looking for a rather rigorous proof. Well, years later in a very famous lecture at Caltech which was published – before I got to Caltech by the way – von Neumann repeated all this; asserted that the exponential improvement would be achieved with this random method, and footnoted Brueckner and me for the majority voter – not for the general idea. And, of course, Sid Dancoff had some responsibility for the general idea, too, because he assigned the problem. Well, I was so flattered to be mentioned in a footnote by John von Neumann that it didn't occur to me that he hadn't actually credited us with what we were doing.

New York-born physicist Murray Gell-Mann (1929-2019) was known for his creation of the eightfold way, an ordering system for subatomic particles comparable to the periodic table. His discovery of the omega-minus particle filled a gap in the system, brought the theory wide acceptance, and led to Gell-Mann winning the Nobel Prize in Physics in 1969.

Listeners: Geoffrey West

Geoffrey West is a Staff Member, Fellow, and Program Manager for High Energy Physics at Los Alamos National Laboratory. He is also a member of the Santa Fe Institute. A native of England, he was educated at Cambridge University (B.A. 1961). He received his Ph.D. from Stanford University in 1966, followed by post-doctoral appointments at Cornell and Harvard Universities. He returned to Stanford as a faculty member in 1970, and later left to build and lead the Theoretical High Energy Physics Group at Los Alamos. He has numerous scientific publications and has edited three books. His primary interest has been in fundamental questions in physics, especially those concerning the elementary particles and their interactions. His long-term fascination with general scaling phenomena grew out of his work on scaling in quantum chromodynamics and the unification of all forces of nature. In 1996 this evolved into a highly productive collaboration with James Brown and Brian Enquist on the origin of allometric scaling laws in biology and the development of realistic quantitative models that analyse the influence of size on the structural and functional design of organisms.

Tags: Control Systems Laboratory, Caltech, Keith Brueckner, John von Neumann, Sid Dancoff

Duration: 3 minutes, 16 seconds

Date story recorded: October 1997

Date story went live: 24 January 2008