Show and tell: My neural network machine
Marvin Minsky, Scientist

We designed this thing and there were... the machine was a rack of 40 of these. So it was about as big as a grand piano and full of racks of equipment. And here is a machine that has a memory, which is the probability that if a signal comes into one of these inputs, another signal will come out the output. And the probability that will happen goes from zero – if this volume control is turned down – to one if it’s turned all the way up. And then, there’s a certain probability that the signal will get through. If the signal gets through, this capacitor remembers that for a few seconds. So that’s... that's the short term memory of the neuron.

And then, if you reward the thing for what it’s done... there are 40 neurons, maybe 20 of them conducted impulses and something happened which you liked, typically. Then, you would press a button to reward the... this animal, which is the size of a grand piano. And a big motor starts and there... there’s a chain that goes to all 40 of these potentiometers. And... but in between that, there’s a magnetic clutch. So, if this conduct... if this neuron actually transmitted an impulse and this capacitor remembers it, then this clutch will be engaged. And when the big chain moves through all of these things, then the ones that have recently fired will... will move a little bit. And the amount that it moves will depend on how long ago it fired because the charge on this capacitor is time dependent and... once you’ve charged the capacitor, the current goes through this resistor and... and drains out. So, this is a little short term memory of what recently happened and this is the long term memory.

So, I could make a simulated rat out of 40 of these. It had a big plug board and you just wire them all in random... this thing connected to that and so forth. And it could learn. It could learn and we built a maze which was a copy of a maze that Shannon had built.
Shannon built a machine that learnt to run through a maze not using a nervous system, but using relays. And I sort of copied that and this one also could learn slowly. But it... it learnt some things and it couldn’t learn some other things. And no sooner was the machine finished and we ran some experiments with it than I concluded that this theory wasn’t good enough.
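The learning rule Minsky describes – each unit transmits with a probability set by a potentiometer, a capacitor records recent firing and drains away through a resistor, and a reward nudges every potentiometer in proportion to its capacitor's remaining charge – can be sketched in software. This is a minimal illustrative simulation, not a reconstruction of the actual SNARC circuitry; the class name, the decay rate, and the reward step size are all invented for the example.

```python
import random

class SnarcNeuron:
    """One illustrative SNARC-style unit: a 'potentiometer' (long-term
    weight, the transmission probability) plus a 'capacitor' (short-term
    trace of how recently the unit fired)."""

    def __init__(self):
        self.weight = 0.5   # potentiometer setting: P(signal passes), 0..1
        self.trace = 0.0    # capacitor charge, decays over time

    def step(self, has_input):
        """Probabilistically transmit the input; if the unit fires,
        charge the capacitor so the firing is remembered briefly."""
        fired = has_input and random.random() < self.weight
        if fired:
            self.trace = 1.0
        return fired

    def decay(self, rate=0.7):
        """Charge drains through the resistor each time step, so older
        firings count for less when a reward arrives."""
        self.trace *= rate

    def reward(self, amount=0.05):
        """The motor-chain-and-clutch update: move the potentiometer by
        an amount proportional to the capacitor charge, i.e. recently
        fired units are strengthened most."""
        self.weight = min(1.0, self.weight + amount * self.trace)
```

A "rat" would be 40 such units wired together at random, stepped through the maze, with `reward()` called on every unit whenever the experimenter pressed the reward button. In modern terms the capacitor plays the role of an eligibility trace in reinforcement learning.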

Marvin Minsky (1927-2016) was one of the pioneers of the field of Artificial Intelligence, founding the MIT AI lab in 1970. He also made many contributions to the fields of mathematics, cognitive psychology, robotics, optics and computational linguistics. Since the 1950s, he had been attempting to define and explain human cognition, the ideas of which can be found in his two books, The Emotion Machine and The Society of Mind. His many inventions include the first confocal scanning microscope, the first neural network simulator (SNARC) and the first LOGO 'turtle'.

Listeners: Christopher Sykes

Christopher Sykes is a London-based television producer and director who has made a number of documentary films for BBC TV, Channel 4 and PBS.

Tags: Claude Shannon

Duration: 3 minutes, 22 seconds

Date story recorded: 29-31 Jan 2011

Date story went live: 13 May 2011