I like to train large deep neural nets too 🧠🤖💥 | First Paper (AutoAgents: A Framework for Automatic Agent Generation) Accepted @ IJCAI 2024 | Role Model Karpathy
Implements, from first principles, a discrete flow matching model for code generation: trained a small 2D DFM model on two variations of binary-search code. The result was amazing. Code: https://github.com/Jaykef/ai-algorithms/blob/main/dfm.ipynb
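For context, here's a minimal sketch of a masking-style training step, one common instantiation of discrete flow matching rather than the notebook's exact objective; `model(noisy_tokens, t)` returning per-token logits and the reserved `mask_id` are placeholder assumptions:

```python
import torch
import torch.nn.functional as F

def dfm_training_step(model, tokens, mask_id, optimizer):
    """One masking-style DFM training step (toy sketch, hypothetical model API)."""
    B, L = tokens.shape
    t = torch.rand(B, 1, device=tokens.device)           # time in [0, 1]
    keep = torch.rand(B, L, device=tokens.device) < t    # keep more tokens as t -> 1
    noisy = torch.where(keep, tokens, torch.full_like(tokens, mask_id))

    logits = model(noisy, t)                              # (B, L, vocab)
    # supervise only the corrupted positions to recover the clean tokens
    loss = F.cross_entropy(logits[~keep], tokens[~keep])

    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```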
In Honour of This Year's NeurIPS Test of Time Paper Awardees
This year's NeurIPS Test of Time Paper Awards went to two groundbreaking papers:
1. Generative Adversarial Nets (Goodfellow et al.)
2. Sequence to Sequence Learning with Neural Networks (Sutskever et al.)
Let's explore how these papers helped pioneer breakthroughs in today's AI:
Lightweight implementation of the seminal paper “Sequence to Sequence Learning with Neural Networks”
Built, trained, and evaluated a 2-layer seq2seq LSTM model (~10M params) on the German-English corpus of the Multi30k dataset. In honor of Ilya Sutskever et al. winning this year's NeurIPS Test of Time paper award 🫡
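For reference, a minimal sketch of a 2-layer LSTM encoder-decoder in PyTorch, in the spirit of the paper; the vocabulary sizes and dimensions below are placeholder assumptions, not the post's exact config:

```python
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, src_vocab, trg_vocab, emb_dim=256, hid_dim=512):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.trg_emb = nn.Embedding(trg_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hid_dim, num_layers=2, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hid_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hid_dim, trg_vocab)

    def forward(self, src, trg):
        # encode the source; the final (h, c) acts as the fixed-size "thought vector"
        _, state = self.encoder(self.src_emb(src))
        # teacher forcing: decode conditioned on the encoder state
        dec_out, _ = self.decoder(self.trg_emb(trg), state)
        return self.out(dec_out)

# hypothetical vocab sizes; a real run would use the Multi30k vocabularies
model = Seq2Seq(src_vocab=8000, trg_vocab=6000)
src = torch.randint(0, 8000, (4, 12))   # batch of source token ids
trg = torch.randint(0, 6000, (4, 10))   # batch of target token ids
logits = model(src, trg)                # (4, 10, 6000)
loss = nn.functional.cross_entropy(logits.reshape(-1, 6000), trg.reshape(-1))
```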
Rethinking Backpropagation: Thoughts on What's Wrong with Backpropagation
As a young researcher, I've often pondered the limitations of backpropagation, especially when compared with how learning occurs in the human brain. While backpropagation has been the workhorse of deep learning, it isn't without flaws. In this post, I aim to share some thoughts on these shortcomings from first principles.
Implements the compute-efficient DeepPCR algorithm, which parallelizes sequential operations to speed up inference and training of neural networks. DeepPCR can significantly reduce the time complexity of operations such as denoising in latent diffusion space from O(L) to O(log₂ L).
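DeepPCR recasts a chain of sequential steps as a joint system solved with parallel reductions; as a toy analogue of the log-depth idea (not the paper's actual method), here is how a linear recurrence x_t = a_t·x_{t-1} + b_t can be solved in O(log L) parallel passes with an affine prefix scan:

```python
import torch

def parallel_linear_recurrence(a, b):
    # Solve x_t = a_t * x_{t-1} + b_t (with x_{-1} = 0) for all t using O(log L)
    # parallel passes instead of an O(L) sequential loop. Each step is the affine
    # map f_t(x) = a_t * x + b_t; composition of affine maps is associative, so
    # prefix compositions can be built with a Hillis-Steele scan.
    A, B = a.clone(), b.clone()
    L, stride = a.shape[0], 1
    while stride < L:
        A_prev = torch.ones_like(A)    # identity map for positions < stride
        B_prev = torch.zeros_like(B)
        A_prev[stride:] = A[:-stride]
        B_prev[stride:] = B[:-stride]
        # compose each map with the one `stride` positions to its left
        A, B = A * A_prev, A * B_prev + B
        stride *= 2
    return B  # x_t = prefix map applied to 0 = B_t

# sanity check against the plain sequential loop
a, b = torch.rand(16), torch.rand(16)
x, prev = torch.zeros(16), torch.tensor(0.0)
for t in range(16):
    prev = a[t] * prev + b[t]
    x[t] = prev
assert torch.allclose(parallel_linear_recurrence(a, b), x, atol=1e-5)
```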
Here we implement the seminal RNN paper “Generating Text with Recurrent Neural Networks”: we train a character-level multiplicative RNN (~250k params) for 1000 epochs with the Adam optimizer on 2Pac's "Hit 'em Up"; sampling from it was fun lol.
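The core of that paper is the multiplicative RNN, whose recurrent transition is factorized so it depends on the current input character. A minimal PyTorch cell sketch follows; the sizes are hypothetical, not the notebook's exact ones:

```python
import torch
import torch.nn as nn

class MRNNCell(nn.Module):
    """Minimal multiplicative RNN cell (Sutskever, Martens & Hinton, 2011).

    The recurrent weight matrix is made input-dependent via a factor layer:
        f_t = (W_fx x_t) * (W_fh h_{t-1})      # elementwise gating
        h_t = tanh(W_hf f_t + W_hx x_t + b_h)
    """
    def __init__(self, input_size, hidden_size, factor_size):
        super().__init__()
        self.W_fx = nn.Linear(input_size, factor_size, bias=False)
        self.W_fh = nn.Linear(hidden_size, factor_size, bias=False)
        self.W_hf = nn.Linear(factor_size, hidden_size, bias=False)
        self.W_hx = nn.Linear(input_size, hidden_size)

    def forward(self, x_t, h_prev):
        f_t = self.W_fx(x_t) * self.W_fh(h_prev)
        return torch.tanh(self.W_hf(f_t) + self.W_hx(x_t))

# one step on a batch of one-hot characters (hypothetical sizes)
cell = MRNNCell(input_size=128, hidden_size=256, factor_size=256)
h = torch.zeros(4, 256)
x = torch.zeros(4, 128); x[:, 42] = 1.0   # pretend character id 42
h = cell(x, h)
```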
Interesting Work on Reasoning 🤔
- Explores a new take on few-shot reasoning while challenging the assumption that program synthesis is necessary for abstract reasoning.
- Shows test-time training + smart inference tricks can match average human performance, though at high computational cost.
Key insight: proper compute allocation matters more than the method (whether symbolic or neural).
It's work like this that in some way signals the eventual “dominance” of AI over all the sciences.
“We train our model on the six-dimensional N-body phase space, predicting particle velocities as the time derivative of the model’s displacement outputs”
The emulator is capable of predicting the nonlinear displacement and velocity fields for 128^3 particles in half a second on a single GPU🤯
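One hedged sketch of what “velocities as the time derivative of the displacement outputs” could look like in code, using forward-mode autodiff through a hypothetical `displacement_net` (the actual emulator may compute this differently):

```python
import torch

def displacement_and_velocity(displacement_net, positions, t):
    # forward-mode directional derivative w.r.t. the scalar time input gives
    # d(displacement)/dt for every particle in one extra pass
    disp, vel = torch.autograd.functional.jvp(
        lambda tt: displacement_net(positions, tt), (t,), (torch.ones_like(t),)
    )
    return disp, vel

# toy stand-in for the emulator: displacement grows linearly with time,
# so the recovered velocity equals the per-particle coefficient
coeff = torch.randn(128, 3)
net = lambda pos, tt: coeff * tt
pos = torch.randn(128, 3)
disp, vel = displacement_and_velocity(net, pos, torch.tensor(0.5))
assert torch.allclose(vel, coeff)
```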
Triton nanoGPT now has a custom cross-entropy loss kernel 🚀 Next: matmul, gradually overthrowing all major PyTorch ops :)
Simplified pseudocode for the parallel cross-entropy loss computation:
- Init program: get the program id, compute offsets, load targets.
- Init row_max and row_sum.
- Loop 1 (find max logit): update row_max with the block-wise max logits.
- Loop 2 (compute softmax and loss): compute row_sum, update loss.
- Add log(row_sum) and store the loss.
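A minimal Triton sketch following those steps; the block size, pointer layout, and two-pass structure here are assumptions rather than the repo's exact kernel:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def cross_entropy_kernel(logits_ptr, targets_ptr, loss_ptr,
                         n_cols, stride, BLOCK_SIZE: tl.constexpr):
    # one program per row of logits
    row = tl.program_id(0)
    row_ptr = logits_ptr + row * stride
    target = tl.load(targets_ptr + row)

    # pass 1: find the row max for numerical stability
    row_max = -float("inf")
    for start in range(0, n_cols, BLOCK_SIZE):
        cols = start + tl.arange(0, BLOCK_SIZE)
        mask = cols < n_cols
        block = tl.load(row_ptr + cols, mask=mask, other=-float("inf"))
        row_max = tl.maximum(row_max, tl.max(block, axis=0))

    # pass 2: accumulate sum(exp(logit - max)) and pick out the target logit
    row_sum = 0.0
    target_logit = 0.0
    for start in range(0, n_cols, BLOCK_SIZE):
        cols = start + tl.arange(0, BLOCK_SIZE)
        mask = cols < n_cols
        block = tl.load(row_ptr + cols, mask=mask, other=-float("inf"))
        row_sum += tl.sum(tl.exp(block - row_max), axis=0)
        target_logit += tl.sum(tl.where(cols == target, block, 0.0), axis=0)

    # loss = -log softmax(logits)[target]
    loss = tl.log(row_sum) - (target_logit - row_max)
    tl.store(loss_ptr + row, loss)

def triton_cross_entropy(logits, targets):
    # logits: (n_rows, n_cols) CUDA tensor, targets: (n_rows,) int64 CUDA tensor
    n_rows, n_cols = logits.shape
    loss = torch.empty(n_rows, device=logits.device, dtype=torch.float32)
    cross_entropy_kernel[(n_rows,)](logits, targets, loss,
                                    n_cols, logits.stride(0), BLOCK_SIZE=1024)
    return loss.mean()
```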