Geoffrey Hinton

Computer Scientist

Birthday December 6, 1947

Birth Sign Sagittarius

Birthplace Wimbledon, London, England

Age 76 years old

Nationality British-Canadian


1947

Geoffrey Everest Hinton (born 6 December 1947) is a British-Canadian computer scientist and cognitive psychologist, most noted for his work on artificial neural networks.

1970

After repeatedly changing his degree between different subjects like natural sciences, history of art, and philosophy, he eventually graduated in 1970 with a Bachelor of Arts in experimental psychology.

Reverse-mode automatic differentiation, of which backpropagation is a special case, was proposed by Seppo Linnainmaa in 1970, and Paul Werbos proposed to use it to train neural networks in 1974.
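The connection between reverse-mode automatic differentiation and backpropagation can be illustrated with a minimal sketch. The `Var` class below is hypothetical, not taken from any particular library: each value records its parents and the local derivative of the operation that produced it, and `backward` propagates derivatives from the output back to the inputs.

```python
class Var:
    """A scalar that records how it was computed, so derivatives can be
    swept backwards from an output to every input (reverse-mode AD)."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate d(output)/d(self), then pass the chain-rule product on.
        self.grad += seed
        for parent, local_deriv in self.parents:
            parent.backward(seed * local_deriv)

x = Var(3.0)
y = Var(4.0)
z = x * y + x          # z = x*y + x
z.backward()
print(x.grad, y.grad)  # dz/dx = y + 1 = 5.0, dz/dy = x = 3.0
```

Backpropagation is this same backward sweep specialised to the computation graph of a neural network's loss.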

In the mid-1980s, Hinton co-invented Boltzmann machines with David Ackley and Terry Sejnowski.

His other contributions to neural network research include distributed representations, time delay neural networks, mixtures of experts, Helmholtz machines, and products of experts.

1978

He continued his study at the University of Edinburgh where he was awarded a PhD in artificial intelligence in 1978 for research supervised by Christopher Longuet-Higgins.

After his PhD, Hinton worked at the University of Sussex and, after difficulty finding funding in Britain, at the University of California, San Diego, and Carnegie Mellon University.

He was the founding director of the Gatsby Computational Neuroscience Unit at University College London.

He is a professor in the computer science department at the University of Toronto.

He holds a Canada Research Chair in Machine Learning and is currently an advisor for the Learning in Machines & Brains program at the Canadian Institute for Advanced Research.

1986

With David Rumelhart and Ronald J. Williams, Hinton was co-author of a highly cited paper published in 1986 that popularised the backpropagation algorithm for training multi-layer neural networks, although they were not the first to propose the approach.

Hinton is viewed as a leading figure in the deep learning community.

1992

An accessible introduction to Geoffrey Hinton's research can be found in his articles in Scientific American in September 1992 and October 1993.

1998

Hinton was elected a Fellow of the Royal Society (FRS) in 1998.

2001

He was the first winner of the Rumelhart Prize in 2001.


2007

In 2007, Hinton co-authored a paper titled "Unsupervised Learning of Image Transformations".

2012

AlexNet, an image-recognition network designed in collaboration with his students Alex Krizhevsky and Ilya Sutskever for the 2012 ImageNet challenge, was a dramatic milestone and a breakthrough in the field of computer vision.

Hinton taught a free online course on Neural Networks on the education platform Coursera in 2012.

2013

From 2013 to 2023, he divided his time working for Google (Google Brain) and the University of Toronto, before publicly announcing his departure from Google in May 2023, citing concerns about the risks of artificial intelligence (AI) technology.

He joined Google in March 2013 when his company, DNNresearch Inc., was acquired, and was at that time planning to "divide his time between his university research and his work at Google".

Hinton's research concerns ways of using neural networks for machine learning, memory, perception, and symbol processing.

He has written or co-written more than 200 peer-reviewed publications.

At the 2022 Conference on Neural Information Processing Systems (NeurIPS), he introduced a new learning algorithm for neural networks that he calls the "Forward-Forward" algorithm.

The idea of the new algorithm is to replace the traditional forward-backward passes of backpropagation with two forward passes, one with positive (i.e. real) data and the other with negative data that could be generated solely by the network.
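A minimal NumPy sketch of this idea, under simplifying assumptions: following the Forward-Forward paper, each layer is trained with a purely local objective, its "goodness" (sum of squared activities) is pushed above a threshold for positive data and below it for negative data, and no backward pass is used. The layer sizes, learning rate, and toy data below are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def goodness(h):
    # "Goodness" of a layer's activity: the sum of squared activations.
    return (h * h).sum(axis=1)

class FFLayer:
    """One layer trained locally: high goodness on positive (real) data,
    low goodness on negative data. No gradients flow between layers."""
    def __init__(self, n_in, n_out, lr=0.05, theta=2.0):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.lr, self.theta = lr, theta

    def _normalize(self, x):
        # Length-normalize inputs so only their direction carries information.
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    def forward(self, x):
        return np.maximum(0.0, self._normalize(x) @ self.W)  # ReLU

    def train_step(self, x_pos, x_neg):
        for x, positive in ((x_pos, True), (x_neg, False)):
            xn = self._normalize(x)
            h = np.maximum(0.0, xn @ self.W)
            # Probability the sample is positive, from goodness vs. threshold.
            p = 1.0 / (1.0 + np.exp(-(goodness(h) - self.theta)))
            # dLoss/dgoodness for -log p (positive) or -log(1-p) (negative).
            c = (p - 1.0) if positive else p
            # Chain rule through goodness = sum(h^2): local update only.
            self.W -= self.lr * xn.T @ (c[:, None] * 2.0 * h) / len(x)

# Toy demo: positive data clusters along one direction; negative is noise.
layer = FFLayer(8, 16)
pos = np.tile(np.eye(8)[0], (32, 1)) + 0.1 * rng.normal(size=(32, 8))
neg = rng.normal(size=(32, 8))
for _ in range(200):
    layer.train_step(pos, neg)
print(goodness(layer.forward(pos)).mean(), goodness(layer.forward(neg)).mean())
```

After training, the layer's goodness should be markedly higher on the positive data than on the negative data, which is the separation the algorithm optimises for.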

While Hinton was a postdoc at UC San Diego, David E. Rumelhart, Hinton, and Ronald J. Williams applied the backpropagation algorithm to multi-layer neural networks.

Their experiments showed that such networks can learn useful internal representations of data.
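A small worked example of the kind of result they reported: XOR is not linearly separable, so a network can only solve it if its hidden layer learns a useful internal representation of the inputs. The sketch below trains a two-layer sigmoid network with hand-derived backpropagation; the layer sizes and learning rate are illustrative choices, not those of the 1986 paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# XOR truth table: no single linear boundary separates the two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr = 1.0

for _ in range(10000):
    # Forward pass through hidden and output layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the error derivative layer by layer (MSE loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;  b1 -= lr * d_h.sum(0)

print(out.ravel())  # outputs approach the XOR targets 0, 1, 1, 0
```

The hidden activations `h` end up encoding intermediate features of the input pairs, which is what "useful internal representations" refers to.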

2017

In 2017, he co-founded and became the chief scientific advisor of the Vector Institute in Toronto.

In October and November 2017 respectively, Hinton published two open-access research papers on capsule neural networks, which, according to Hinton, are "finally something that works well".

In May 2023, Hinton publicly announced his resignation from Google.

He explained his decision by saying that he wanted to "freely speak out about the risks of A.I." and added that a part of him now regrets his life's work.

Notable former PhD students and postdoctoral researchers from his group include Peter Dayan, Sam Roweis, Max Welling, Richard Zemel, Brendan Frey, Radford M. Neal, Yee Whye Teh, Ruslan Salakhutdinov, Ilya Sutskever, Yann LeCun, Alex Graves, and Zoubin Ghahramani.

2018

Hinton received the 2018 Turing Award, often referred to as the "Nobel Prize of Computing", together with Yoshua Bengio and Yann LeCun, for their work on deep learning.

They are sometimes referred to as the "Godfathers of Deep Learning", and have continued to give public talks together.


He has voiced concerns about deliberate misuse by malicious actors, technological unemployment, and existential risk from artificial general intelligence.

Hinton was educated at Clifton College in Bristol and King's College, Cambridge.

In a 2018 interview, Hinton said that "David E. Rumelhart came up with the basic idea of backpropagation, so it's his invention".

Although this work was important in popularising backpropagation, it was not the first to suggest the approach.