Nagi-ovo

Breezing

Deep Learning

A "Speedrun" Through PPO

Proximal Policy Optimization: at last we reach one of the RL algorithms that has been hottest in NLP these past few years. In on-policy algorithms, the policy used to collect data is the same one being trained; the problem is that data must be discarded after a single use and then collected afresh, which makes training slow. The intuition behind PPO …
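
The slow data-reuse problem mentioned in the teaser is what PPO's clipped surrogate objective addresses: it lets the same batch be reused for several gradient steps while keeping the updated policy close to the one that collected the data. As a quick illustration, a generic NumPy sketch of the standard clipped loss (names are mine, not code from the post):

```python
import numpy as np

def ppo_clip_loss(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate objective from PPO, returned as a loss (negated)."""
    ratio = np.exp(logp_new - logp_old)               # pi_new(a|s) / pi_old(a|s)
    unclipped = ratio * advantages
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantages
    # the elementwise minimum removes any incentive to push the
    # probability ratio outside the [1 - eps, 1 + eps] trust region
    return -np.mean(np.minimum(unclipped, clipped))
```

With identical old and new log-probs the ratio is 1 and the loss is just the negated mean advantage; once the ratio drifts past 1 + eps, the gradient through the clipped term vanishes, so stale data cannot drag the policy arbitrarily far.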

Getting Started with Knowledge Distillation

This post attempts to combine two resources: an introductory demo, the Knowledge Distillation Tutorial from the PyTorch Tutorials, and more advanced material from MIT 6.5940 Fall 2024, TinyML and Efficient Deep Learning Computing…

Softmax in OpenAI Triton

This post summarizes what I learned from the YouTube tutorials by @sotadeeplearningtutorials9598. Thanks to the instructor's clear and accessible teaching, I, a complete beginner to GPU programming, was able to write my first kernel with real, working results. Softmax is a commonly used activation function…
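
The Triton kernel itself is the subject of the post; for orientation, here is a NumPy reference of the row-wise, numerically stable softmax that such a kernel computes (the helper name is mine):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtracting the row max leaves the result mathematically unchanged
    # but prevents np.exp from overflowing on large inputs
    shifted = x - np.max(x, axis=axis, keepdims=True)
    e = np.exp(shifted)
    return e / np.sum(e, axis=axis, keepdims=True)
```

The max-subtraction trick is the part worth internalizing before reading the kernel: a fused GPU softmax performs the same three reductions (max, exp-sum, divide) per row, just without materializing intermediates in global memory.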

The Evolution of LLMs (5): The Road to Self-Attention, from Transformer to GPT and the Future of Language Models

Prerequisites: the earlier micrograd and makemore series (optional), familiarity with Python, and basic concepts from calculus and statistics. Goal: understand and appreciate how GPT works. Materials you may need: the Colab Notebook link, plus a very detailed set of notes I found on Twitter, written better than mine. In…

The Evolution of LLMs (4): WaveNet, a Convolutional Innovation for Sequence Models

The source code repository for this section. In the earlier parts we built a character-level multilayer-perceptron language model; now it is time to make its architecture more complex. The goal is for the input sequence to take in many more characters than the current 3. Beyond that, we don't want to cram them all into a single hidden layer, to avoid squashing too much information. The result is a deeper, WaveNet-like model.…
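
The "deeper, WaveNet-like" idea the excerpt describes is to fuse neighboring characters gradually, a few at a time, rather than flattening the whole context window into one hidden layer at once. A shape-only sketch of that grouping step (the function name and NumPy framing are mine, assuming a batch of T character embeddings of size C):

```python
import numpy as np

def group_consecutive(x, n):
    """Fuse every n consecutive embeddings into one wider vector, so each
    successive layer sees a progressively larger chunk of the context."""
    B, T, C = x.shape
    assert T % n == 0, "sequence length must be divisible by the group size"
    return x.reshape(B, T // n, C * n)
```

Stacking this between linear layers halves the time dimension at each level, so an 8-character context gets merged over three levels instead of being compressed in a single step.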

The Evolution of LLMs (3): Batch Normalization, a Statistical Reconciliation of Activations and Gradients

The focus of this section is to develop a deep intuition for the activations of a neural network during training, and especially for the gradients flowing backward through it. Understanding how these architectures developed historically matters, because the RNN (recurrent neural network), as a universal approximator, can in principle implement any algorithm…

The Evolution of LLMs (2): Word Embeddings, the Deep Connection Between Multilayer Perceptrons and Language

The source code repository for this section. This post covers a classic of language-model training: Bengio introduced neural networks into language modeling and obtained word embeddings as a by-product. Word embeddings went on to contribute greatly to deep learning in natural language processing, and they remain an effective way to capture the semantic features of words. The paper set out to solve the problems of the original word vectors (one-hot representations…
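
The by-product the excerpt mentions, the embedding table, replaces the sparse one-hot machinery: multiplying a one-hot vector by a matrix just selects one row, so a plain lookup suffices. A tiny sketch of that equivalence (shapes and names are illustrative, not from the post):

```python
import numpy as np

vocab_size, embed_dim = 5, 3
rng = np.random.default_rng(0)
C = rng.normal(size=(vocab_size, embed_dim))   # embedding table, one row per word

word = 2
one_hot = np.eye(vocab_size)[word]             # sparse one-hot vector
dense = C[word]                                # direct row lookup
assert np.allclose(one_hot @ C, dense)         # the two are equivalent
```

The lookup view is why the table's rows end up carrying semantics: gradients flow only into the rows of the words actually seen, pulling words used in similar contexts toward similar vectors.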

The Evolution of LLMs (1): The Simple Elegance of Bigrams

The source code repository for this section. By implementing micrograd earlier, we worked out what gradients mean and how to optimize with them. Now we can move on to studying language models, and see how they are designed and modeled in their earliest forms. Bigram (one character predicts the next through a lookup table of counts.) MLP, following Bengio et al…
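
The parenthetical above is the whole model: a table of counts indexed by (current character, next character), normalized into probabilities. A minimal sketch on a toy word list (the post trains on a much larger names dataset; these three words are mine):

```python
import numpy as np

words = ["emma", "olivia", "ava"]              # toy stand-in for the dataset
chars = sorted(set("".join(words)) | {"."})    # "." marks word start and end
stoi = {c: i for i, c in enumerate(chars)}

# N[i, j] counts how often character j follows character i
N = np.zeros((len(chars), len(chars)), dtype=np.int64)
for w in words:
    for a, b in zip("." + w, w + "."):
        N[stoi[a], stoi[b]] += 1

# each row becomes a probability distribution over the next character
P = N / N.sum(axis=1, keepdims=True)
```

Sampling a name is then just repeated draws from the rows of P, starting from ".", which is exactly the count-based lookup-table prediction the excerpt describes.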

Building a Minimal Automatic Differentiation Framework from Scratch

Code repository: https://github.com/karpathy/nn-zero-to-hero Andrej Karpathy is the author and lead instructor of the famous deep learning course Stanford CS 231n, and one of the founders of OpenAI. He created "micrograd", a small…
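
In the spirit of the micrograd the post walks through, a scalar autograd engine fits in a few dozen lines: each value remembers its parents and a local backward rule, and backpropagation is one reverse topological sweep. A minimal sketch with add and multiply only (my condensed version, not the post's exact code):

```python
class Value:
    """A scalar that records how it was produced so gradients can flow back."""
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._backward = lambda: None
        self._prev = set(children)

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():            # d(a+b)/da = 1, d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = _backward
        return out

    def backward(self):
        # topological order guarantees a node's grad is complete before use
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for child in v._prev:
                    build(child)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            v._backward()
```

For c = a*b + a with a = 2 and b = 3, calling c.backward() accumulates a.grad = b + 1 = 4 and b.grad = a = 2, matching the chain rule applied by hand.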