Neural and non-neural AI, reasoning, transformers and LSTMs

Jürgen Schmidhuber, the father of generative AI, shares his groundbreaking work in deep learning and artificial intelligence. In this exclusive interview, he discusses the history of AI, some of his contributions to the field, and his vision for the future of intelligent machines. Schmidhuber offers unique insights into the exponential growth of technology and the potential impact of AI on humanity and the universe.

MLST is sponsored by Brave:
The Brave Search API covers an index of over 20 billion web pages, built from the ground up without Big Tech bias or the recent outrageous price increases on search API access. Perfect for AI model training and retrieval augmented generation. Try it now: get 2,000 free searches per month at http://brave.com/api.

Table of contents
00:00:00 Introduction
00:03:38 Reasoning
00:13:09 Potential AI breakthroughs that reduce the need for computing power
00:20:39 Memorization vs. Generalization in AI
00:25:19 Tackling the ARC Challenge
00:29:10 Perceptions of ChatGPT and AGI
00:58:45 Summary of the principles of Jürgen's approach
01:04:17 Analogical reasoning and compression
01:05:48 Breakthroughs in 1991: The P, the G, and the T in ChatGPT and Generative AI
01:15:50 Use of LSTM in language models by tech giants
01:21:08 Neural Network Aspect Ratio Theory
01:26:53 Reinforcement Learning without explicit teachers

References:
"Annotated History of Modern AI and Deep Learning" (2022 survey by Schmidhuber):
Chain rule for backward credit allocation (Leibniz, 1676)
First neural network / linear regression / shallow learning (Gauss & Legendre, circa 1800)
First 20th century pioneer of practical AI (Quevedo, 1914)
First recurrent NN (RNN) architecture (Lenz, Ising, 1920-1925)
AI theory: fundamental limitations of computation and computation-based AI (Gödel, 1931-34)
Unpublished ideas on the development of RNNs (Turing, 1948)
Multi-layer feedforward NN without deep learning (Rosenblatt, 1958)
First published learning RNNs (Amari et al., ~1972)
First Deep Learning (Ivakhnenko & Lapa, 1965)
Deep learning by stochastic gradient descent (Amari, 1967-68)
ReLUs (Fukushima, 1969)
Backpropagation (Linnainmaa, 1970); precursor (Kelley, 1960)
Backpropagation for NNs (Werbos, 1982)
First deep convolutional NN (Fukushima, 1979); later combined with Backprop (Waibel 1987, Zhang 1988).
Metalearning or learning to learn (Schmidhuber, 1987)
Generative Adversarial Networks / Artificial Curiosity / NN Online Planners (Schmidhuber, February 1990; see the G in Generative AI and ChatGPT)
NNs learn to generate subgoals and operate on command (Schmidhuber, April 1990)
NNs Learn to Program NNs: Unnormalized Linear Transformer (Schmidhuber, March 1991; see the T in ChatGPT)
Deep Learning by Self-Supervised Pre-Training. Distilling NNs (Schmidhuber, April 1991; see the P in ChatGPT)
Experiments with pre-training; analysis of vanishing/exploding gradients, roots of long-term memory / highway networks / ResNets (Hochreiter, June 1991, further developed 1999-2015 with other students of Schmidhuber)
LSTM journal article (1997, most cited AI article of the 20th century)
xLSTM (Hochreiter, 2024)
Reinforcement Learning Prompt Engineer for abstract reasoning and planning (Schmidhuber 2015)
Mindstorms in Natural Language-Based Societies of the Mind (2023 paper by Schmidhuber's team)
https://arxiv.org/abs/2305.17066
Bremermann's physical computation limit (1982)
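The "Unnormalized Linear Transformer" entry above (the T in ChatGPT) refers to the 1991 fast-weight idea: one network programs the weights of another via outer products of keys and values, which is equivalent to attention without the softmax. A minimal toy sketch of that update rule, with illustrative dimensions and random vectors standing in for learned projections:

```python
import numpy as np

# Toy sketch of an unnormalized linear attention step, in the style of
# the 1991 fast weight programmer. Dimensions and vectors are
# illustrative assumptions, not the original formulation.
d = 4                                # key/value dimension (illustrative)
rng = np.random.default_rng(0)

W = np.zeros((d, d))                 # fast weight matrix, starts empty
for _ in range(3):                   # process a short "sequence"
    k = rng.standard_normal(d)       # key (would come from the slow net)
    v = rng.standard_normal(d)       # value (would come from the slow net)
    W += np.outer(v, k)              # program the fast weights: W += v k^T

q = rng.standard_normal(d)           # query
y = W @ q                            # retrieval step
# Since W = sum_i v_i k_i^T, we get y = sum_i v_i * (k_i . q):
# attention scores k_i . q weighting the values, with no softmax.
```

The point of the sketch is only the algebraic identity: storing outer products and multiplying by a query is the same computation as unnormalized linear attention.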

EXTERNAL LINKS
CogX 2018 – Professor Juergen Schmidhuber
https://www.youtube.com/watch?v=17shdT9-wuA
Discovery of neural networks with low Kolmogorov complexity and high generalization ability (Neural Networks, 1997)
https://sferics.idsia.ch/pub/juergen/loconet.pdf
The Paradox at the Heart of Mathematics: Gödel's Incompleteness Theorem – Marcus du Sautoy
https://www.youtube.com/watch?v=I4pQbo5MQOs
The Philosophy of Science – Hilary Putnam & Bryan Magee (1977)
https://www.youtube.com/watch?v=JJB2q8ufAgk
Optimal Ordered Problem Solver (OOPS)
https://arxiv.org/abs/cs/0207097
Levin's 1973 Universal Search
https://rjlipton.com/2011/03/14/levins-great-discoveries/
https://people.idsia.ch/~juergen/optimalsearch.html
On Learning to Think
https://arxiv.org/abs/1511.09249
Untersuchungen zu dynamischen neuronalen Netzen (Investigations on dynamic neural networks, Hochreiter's 1991 diploma thesis)
https://www.bioinf.jku.at/publications/older/3804.pdf
Evolutionary principles in self-referential learning
https://people.idsia.ch/~juergen/diploma1987ocr.pdf
Hans-Joachim Bremermann
https://en.wikipedia.org/wiki/Bremermann%27s_limit
Highway networks
https://arxiv.org/abs/1505.00387
https://people.idsia.ch/~juergen/highway-networks.html
The Principles of Deep Learning Theory
https://amzn.to/3WJtPaj
Understanding Deep Learning
https://amzn.to/4doDk63
Discovering Problem Solutions with Low Kolmogorov Complexity and High Generalizability (ICML 1995)
https://sferics.idsia.ch/pub/juergen/icmlkolmogorov.pdf
"History of modern AI and deep learning":
https://people.idsia.ch/~juergen/deep-learning-history.html

Please feel free to share this video with your friends and family if you found it useful.