It's not every day you see a research paper named "Alice's Adventures in a Differentiable Wonderland," and when you open it, it's a 281-page book!
I haven't completed it yet, but this amazing work, written by Simone Scardapane, is a fascinating introduction to deep neural networks and differentiable programming.
Some key technical highlights:
• Covers core concepts like automatic differentiation, stochastic optimization, and activation functions in depth
• Explains modern architectures like convolutional networks, transformers, and graph neural networks
• Provides mathematical foundations including linear algebra, gradients, and probability theory
• Discusses implementation details in PyTorch and JAX
• Explores advanced topics like Bayesian neural networks and neural scaling laws
The book takes a unique approach, framing neural networks as compositions of differentiable primitives rather than biological analogs. It provides both theoretical insights and practical coding examples.
I especially enjoyed the sections on:
• Vector-Jacobian products and reverse-mode autodiff
• Stochastic gradient descent and mini-batch optimization
• ReLU, GELU, and other modern activation functions
• Universal approximation capabilities of MLPs
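To make the first of those topics concrete, here is a toy reverse-mode autodiff engine that computes a vector-Jacobian product. This is a minimal sketch of the idea the book formalizes; real frameworks like PyTorch and JAX do the same thing over full tensor operations with far more machinery.

```python
# Toy reverse-mode autodiff: each Var records its parents and the local
# gradient of the operation that produced it.

class Var:
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (parent_var, local_gradient)
        self.grad = 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def vjp(outputs, cotangent):
    """Accumulate v^T J: seed each output with its cotangent entry,
    then sweep the computation graph in reverse topological order."""
    topo, seen = [], set()
    def visit(v):
        if id(v) not in seen:
            seen.add(id(v))
            for p, _ in v.parents:
                visit(p)
            topo.append(v)
    for out, v in zip(outputs, cotangent):
        out.grad += v
        visit(out)
    for node in reversed(topo):
        for parent, local_grad in node.parents:
            parent.grad += node.grad * local_grad

# f(x, y) = (x * y, x + y) with cotangent v = (1, 2):
x, y = Var(3.0), Var(4.0)
vjp([x * y, x + y], [1.0, 2.0])
print(x.grad, y.grad)  # v^T J = (1*y + 2, 1*x + 2) = (6.0, 5.0)
```

Seeding the backward pass with a cotangent vector instead of a single scalar is exactly what makes reverse mode cheap for many-inputs, few-outputs functions like neural network losses.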
Whether you're new to deep learning or an experienced practitioner, this book offers valuable insights into the fundamentals and latest developments. Highly recommended for anyone working with neural networks!
The TRL v0.13 release is 🔥! My highlights are the new process reward trainer for training models similar to o1, and tool call support:
🧠 Process reward trainer: Enables training of Process-supervised Reward Models (PRMs), which reward the quality of intermediate steps, promoting structured reasoning. Perfect for tasks like stepwise reasoning.
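To illustrate the process-supervision idea (not TRL's actual trainer API, which operates on tokenized, step-separated sequences with a language-model backbone), here is a conceptual sketch where each intermediate reasoning step carries its own correctness label:

```python
import math

def step_loss(predicted_scores, step_labels):
    """Binary cross-entropy averaged over reasoning steps.
    predicted_scores: per-step probabilities that the step is correct.
    step_labels: 1 if the step is sound, 0 if it contains an error."""
    total = 0.0
    for p, y in zip(predicted_scores, step_labels):
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(step_labels)

# A 3-step solution: the model trusts all three steps, but step 3 is wrong,
# so the process reward signal penalizes that step specifically.
scores = [0.9, 0.8, 0.7]
labels = [1, 1, 0]
print(round(step_loss(scores, labels), 3))  # → 0.511
```

The key contrast with outcome-supervised reward models is that the gradient signal localizes errors to individual steps rather than only judging the final answer.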
🔀 Model merging: A new callback leverages mergekit to merge models during training, improving performance by blending reference and policy models - optionally pushing merged models to the Hugging Face Hub.
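For intuition, here is a sketch of linear weight merging, the simplest of the merge methods mergekit supports: interpolate each parameter between the reference model and the policy being trained. The function name and blend ratio are illustrative, not the callback's API.

```python
def merge_linear(reference_weights, policy_weights, alpha=0.5):
    """Return {name: (1 - alpha) * ref + alpha * policy} per parameter.
    alpha controls how far the merged model leans toward the policy."""
    return {
        name: (1 - alpha) * reference_weights[name] + alpha * policy_weights[name]
        for name in reference_weights
    }

# Toy "models" with two scalar parameters each:
ref = {"layer.w": 1.0, "layer.b": 0.0}
pol = {"layer.w": 3.0, "layer.b": 1.0}
print(merge_linear(ref, pol, alpha=0.25))  # → {'layer.w': 1.5, 'layer.b': 0.25}
```

In practice the same interpolation runs over full tensors, and mergekit also offers fancier schemes (SLERP, TIES, etc.) beyond this linear blend.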
🛠️ Tool call support: TRL preprocessing now supports tool integration, laying the groundwork for agent fine-tuning with examples like dynamic temperature fetching in prompts.
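As a sketch of what that preprocessing consumes, here is a JSON-schema-style tool description plus the structured messages a tool call produces. The `get_temperature` function and field values are hypothetical, loosely following the temperature-fetching example mentioned above.

```python
def get_temperature(location: str) -> float:
    """Hypothetical tool: return the current temperature for a location."""
    return 21.5  # stub value for illustration

# JSON-schema-style spec that gets rendered into the prompt:
tool_spec = {
    "type": "function",
    "function": {
        "name": "get_temperature",
        "description": "Get the current temperature for a location.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}

# The assistant's tool call and the tool's result appear as
# structured messages in the conversation:
messages = [
    {"role": "user", "content": "How warm is it in Rome?"},
    {"role": "assistant", "tool_calls": [
        {"type": "function",
         "function": {"name": "get_temperature",
                      "arguments": {"location": "Rome"}}}]},
    {"role": "tool", "name": "get_temperature",
     "content": str(get_temperature("Rome"))},
]
print(messages[-1]["content"])  # → 21.5
```

Keeping tool calls as structured messages like this is what lets a fine-tuning pipeline supervise when and how the model invokes tools, not just the final text.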
⚖️ Mixture of judges: The new AllTrueJudge combines decisions from multiple binary judges for more nuanced evaluation.
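The combination rule itself is simple to sketch: a completion passes only if every binary judge accepts it. This is a conceptual illustration of the idea, not TRL's class interface.

```python
class AllTrue:
    """Accept a completion only when all binary judges accept it."""
    def __init__(self, judges):
        self.judges = judges  # each judge: completion text -> bool

    def judge(self, completion):
        return all(j(completion) for j in self.judges)

# Two toy binary judges: non-empty output, and under a length budget.
is_nonempty = lambda text: len(text.strip()) > 0
is_concise = lambda text: len(text) <= 80

combined = AllTrue([is_nonempty, is_concise])
print(combined.judge("Paris is the capital of France."))  # → True
print(combined.judge(""))                                 # → False
```

Because each judge answers a single yes/no question, composing several of them gives a stricter, more interpretable acceptance criterion than one monolithic score.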