
The Sword Art Online Utilities Project

Welcome, traveler. This is a personal blog built in the style of the legendary SAO game interface. Navigate through the menu to explore the journal, skills, and item logs.

© 2020-2026 Nagi-ovo | RSS | Breezing

Quests

Active Quest List / Archiving...

Let's Build Robots! Annual Review After Graduating from AI Undergrad

Jan 5, 2026, 10:35 PM · 12 min read

Happy New Year 🎆

Annual Review

Building a Blog from Scratch: Do I Still Have the Passion?

Jan 1, 2026 · 12 min read

Fulfilling a decade-old dream: building an SAO-themed blog and documenting the MDsveX writing syntax

design · mdsvex · sveltekit

Ditching the SDEs: A Simpler Path with Flow Matching

Oct 3, 2025, 12:11 AM · 35 min read

Flow Matching gives us a fresh—and simpler—lens on generative modeling. Instead of reasoning about probability densities and score functions, we reason about vector fields and flows.

DL · Flow-Matching
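As a taste of the idea the post describes, here is a minimal 1-D sketch of conditional flow matching (names and numbers are illustrative, not the post's code): along the straight-line path x_t = (1 - t)·x0 + t·x1, the velocity is constant, so the learned vector field simply regresses onto x1 - x0.

```python
import random

# Toy 1-D conditional flow matching sketch (illustrative, not the post's code).
# The linear path x_t = (1 - t) * x0 + t * x1 has constant velocity x1 - x0,
# so that difference is the regression target for the vector field.

def cfm_sample(x0, x1, t):
    """Point on the straight-line probability path at time t."""
    return (1.0 - t) * x0 + t * x1

def cfm_target(x0, x1):
    """Velocity of the linear path: the regression target."""
    return x1 - x0

def cfm_loss(v_pred, x0, x1):
    """Squared error between predicted and target velocity."""
    return (v_pred - cfm_target(x0, x1)) ** 2

# One training pair: noise sample x0, data sample x1.
x0, x1 = 0.0, 2.0
t = random.random()
xt = cfm_sample(x0, x1, t)
# A perfect vector field would predict 2.0 here, giving zero loss.
print(cfm_loss(2.0, x0, x1))  # 0.0
```

No densities or score functions appear anywhere: the whole objective is a plain regression, which is the simplification the post is about.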

Visual Language Models, with PaliGemma as a Case Study

May 22, 2025, 02:35 PM · 45 min read

Thanks to Umar Jamil’s excellent video tutorial. Vision-language models can be grouped into four categories; this post uses PaliGemma to unpack VLM architecture and implementation details.

Deep Learning · Multimodal

From RL to RLHF

May 8, 2025, 02:15 PM · 50 min read

This article is based primarily on Umar Jamil's course, written up for learning and note-taking purposes. The goal is to align LLM behavior with the outputs we want, and RLHF is one of the best-known techniques for doing so.

Deep Learning · RLHF · LLM

Implementing Simple LLM Inference in Rust

Feb 7, 2025, 02:48 PM · 40 min read

I stumbled upon the 'Large Model and AI System Training Camp' hosted by Tsinghua University on Bilibili and signed up immediately, planning to use the Spring Festival holiday to consolidate my theoretical knowledge of LLM inference through practice. Coincidentally, the school VPN was down and research was on hold, so it was the perfect time to organize my study notes.

LLM · Rust · mlsys

2024 Year in Review

Jan 1, 2025, 05:27 AM · 15 min read

2024 marked my first encounter with deep learning and my entry into the field of large models. Looking back someday, perhaps this year will stand out as one of the pivotal choices.

Annual Summary

The Intuition and Mathematics of Diffusion

Dec 13, 2024, 10:02 AM · 40 min read

A deep dive into the intuition and mathematical derivations of diffusion models, from the forward process to the reverse process, covering the core ideas and implementation details of DDPM.

Deep Learning · Diffusion

Let's build AlphaZero

Nov 26, 2024, 02:07 PM · 35 min read

Starting from the design principles of AlphaGo and diving deep into the core mechanisms of MCTS and Self-Play, we reveal step-by-step how to build an AI Gomoku system that can surpass human capabilities.

Deep Learning · Reinforcement Learning · MCTS · Self-Play

PPO Speedrun

Nov 14, 2024, 07:31 AM · 25 min read

Quickly understand the core ideas and implementation details of the PPO (Proximal Policy Optimization) algorithm, and master this important method in modern reinforcement learning.

RL · PPO · Deep Learning
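The heart of PPO fits in a few lines. Here is a minimal sketch of the clipped surrogate objective for a single (state, action) sample (the numbers are illustrative, not the post's code); `ratio` is pi_new(a|s) / pi_old(a|s) and `adv` is the advantage estimate.

```python
# PPO clipped surrogate objective for one sample (illustrative sketch).
# Clipping the ratio to [1 - eps, 1 + eps] and taking the minimum keeps
# the policy update from moving too far from the old policy.

def ppo_clip_objective(ratio, adv, eps=0.2):
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # Pessimistic bound: the minimum of the two surrogates.
    return min(ratio * adv, clipped * adv)

# A large ratio with positive advantage is clipped to about (1 + eps) * adv:
print(ppo_clip_objective(1.5, 1.0))
```

In a real implementation this objective is averaged over a batch and maximized by gradient ascent, alongside a value loss and an entropy bonus.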

Introduction to Knowledge Distillation

Nov 3, 2024, 02:56 PM · 35 min read

Learn the basic principles of Knowledge Distillation and how to transfer knowledge from large models (teachers) to small models (students) for model compression and acceleration.

Deep Learning · Knowledge Distillation
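The soft-target part of distillation can be sketched in plain Python (temperature and logits here are illustrative, not the post's code): the student matches the teacher's temperature-softened distribution, with the loss scaled by T² as in Hinton et al.

```python
import math

# Knowledge-distillation soft-target loss (illustrative sketch).

def softmax(logits, T=1.0):
    """Temperature-softened softmax: larger T flattens the distribution."""
    exps = [math.exp(z / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on T-softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return T * T * kl

# A student that matches the teacher exactly incurs zero distillation loss.
print(kd_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1]))  # 0.0
```

In practice this term is combined with the ordinary cross-entropy on the hard labels via a weighting coefficient.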

The Journey of Cracking the Follow Invite Code

Oct 31, 2024, 07:05 AM · 5 min read

Documenting the full process of cracking a Follow invite code, learning about LSB steganography and the use of the StegOnline tool.

Follow

A First Look at Actor-Critic Methods

Oct 10, 2024, 02:18 PM · 25 min read

Exploring the Actor-Critic method, which combines the strengths of policy gradients (Actor) and value functions (Critic) for more efficient reinforcement learning.

actor-critic · Reinforcement Learning · RL

From DQN to Policy Gradient

Oct 6, 2024, 10:45 AM · 30 min read

Exploring the evolution from value-based methods (DQN) to policy-based methods (Policy Gradient), and understanding the differences and connections between the two.

RL · Reinforcement Learning

Reinforcement Learning Basics and Q-Learning

Oct 2, 2024, 06:17 PM · 40 min read

Learning the fundamental concepts of Reinforcement Learning from scratch, and deeply understanding the Q-Learning algorithm and its application in discrete action spaces.

RL · AI
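The whole Q-Learning algorithm hinges on one update rule, Q(s,a) ← Q(s,a) + α·(r + γ·max_a' Q(s',a') − Q(s,a)), which can be sketched in a few lines (the table shape, states, and numbers are illustrative, not the post's code):

```python
# One tabular Q-learning update (illustrative sketch).
# Q is a table indexed as Q[state][action].

def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    best_next = max(Q[s_next])                        # max_a' Q(s', a')
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]

# Two states, two actions, all values initialized to zero.
Q = [[0.0, 0.0], [0.0, 0.0]]
# Taking action 1 in state 0 yields reward 1.0 and lands in state 1:
print(q_update(Q, s=0, a=1, r=1.0, s_next=1))  # 0.5
```

Repeating this update while exploring (e.g. epsilon-greedily) makes the table converge toward the optimal action values in a discrete environment.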

LoRA in PyTorch

Oct 1, 2024, 05:32 PM · 25 min read

Learn how to implement LoRA (Low-Rank Adaptation) in PyTorch, a parameter-efficient fine-tuning method.

LoRA · PEFT · PyTorch
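The core computation of LoRA is h = Wx + (α/r)·BAx, where only the low-rank factors A (r×d_in) and B (d_out×r) are trained while W stays frozen. A tiny dependency-free sketch (dimensions and values are illustrative, not the post's code):

```python
# LoRA forward pass on a tiny example (illustrative sketch).

def matvec(M, v):
    """Plain matrix-vector product on nested lists."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(W, A, B, x, alpha=1.0):
    r = len(A)                        # rank = number of rows of A
    base = matvec(W, x)               # frozen pretrained weight W @ x
    delta = matvec(B, matvec(A, x))   # low-rank update B @ (A @ x)
    return [b + (alpha / r) * d for b, d in zip(base, delta)]

W = [[1.0, 0.0], [0.0, 1.0]]          # frozen 2x2 identity
A = [[1.0, 1.0]]                      # rank-1 factors
B = [[1.0], [0.0]]
print(lora_forward(W, A, B, [2.0, 3.0]))  # [7.0, 3.0]
```

Because r is much smaller than the weight dimensions, the trainable parameter count drops from d_out·d_in to r·(d_in + d_out).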

Vector Add in Triton

Sep 19, 2024, 03:06 PM · 20 min read

Starting from simple vector addition, learn how to write Triton kernels and explore performance tuning techniques.

Triton · Deep Learning · AI

Softmax in OpenAI Triton

Sep 14, 2024, 05:41 PM · 30 min read

Learn how to write efficient GPU kernels using OpenAI Triton, implementing the Softmax operation and understanding Triton's programming model.

Triton · Deep Learning · Python

Introduction to Policy Gradient

Sep 12, 2024, 12:03 PM · 25 min read

Learning the fundamental principles and implementation of policy gradient methods, and understanding how to train reinforcement learning agents by directly optimizing the policy.

RL · Reinforcement Learning · Policy Gradient

Configuring Ubuntu 20.04 on WSL2

Aug 20, 2024, 08:51 AM · 10 min read

Documenting the complete process of configuring WSL2 and Ubuntu 20.04 on Windows 11, including disk migration, network configuration, and deep learning environment setup.

WSL · Environment Configuration

History of LLM Evolution (6): Unveiling the Mystery of Tokenizers

Jul 4, 2024, 04:42 PM · 50 min read

Deeply understand how tokenizers work, learning about the BPE algorithm, the tokenization strategies of the GPT series, and implementation details of SentencePiece.

LLM · AI · Tokenizer · BPE · NLP
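One round of BPE training, the algorithm at the heart of the post, can be sketched on a toy corpus (the corpus and helper names are illustrative, not the post's code): count adjacent symbol pairs, then merge the most frequent pair into a new token.

```python
from collections import Counter

# A single BPE merge step on a toy token sequence (illustrative sketch).

def most_frequent_pair(tokens):
    """Count every adjacent pair and return the most frequent one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with the concatenated token."""
    out, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            out.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = list("abababcd")
pair = most_frequent_pair(tokens)   # ('a', 'b') occurs three times
print(merge_pair(tokens, pair))     # ['ab', 'ab', 'ab', 'c', 'd']
```

Real tokenizers simply repeat this step for a fixed number of merges and record the merge order, which is then replayed at encoding time.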

History of LLM Evolution (5): Building the Path of Self-Attention — The Future of Language Models from Transformer to GPT

Mar 20, 2024, 08:49 AM · 60 min read

Building the Transformer architecture from scratch, deeply understanding core components like self-attention, multi-head attention, residual connections, and layer normalization.

LLM · GPT · Deep Learning · Transformer

The Way of Fine-Tuning

Mar 15, 2024, 02:46 PM · 20 min read

Learn how to fine-tune large language models under limited VRAM conditions, mastering key techniques like half-precision, quantization, LoRA, and QLoRA.

AI · LLM · Fine-tuning

History of LLM Evolution (4): WaveNet — Convolutional Innovation in Sequence Models

Mar 9, 2024, 04:01 PM · 30 min read

Learn the progressive fusion concept of WaveNet and implement a hierarchical tree structure to build deeper language models.

AI · Deep Learning · LLM

History of LLM Evolution (3): Batch Normalization — Statistical Harmony of Activations and Gradients

Feb 29, 2024, 03:44 PM · 35 min read

Deeply understand the activation and gradient issues in neural network training, and learn how batch normalization solves the training challenges of deep networks.

Deep Learning · AI

The State of GPT

Feb 18, 2024, 08:16 PM · 30 min read

A structured overview of Andrej Karpathy's Microsoft Build 2023 talk, deeply understanding GPT's training process, development status, the current LLM ecosystem, and future outlook.

AI · ChatGPT · LLM · GPT · NLP

History of LLM Evolution (2): Embeddings — MLPs and Deep Language Connections

Feb 17, 2024, 09:48 PM · 25 min read

Exploring Bengio's classic paper to understand how neural networks learn distributed representations of words and how to build a Neural Probabilistic Language Model (NPLM).

AI · LLM · Deep Learning · Embeddings · Neural Networks

History of LLM Evolution (1): The Simplicity of Bigram

Feb 17, 2024, 11:05 AM · 20 min read

Starting with the simplest Bigram model to explore the foundations of language modeling. Learn how to predict the next character through counting and probability distributions, and how to achieve the same effect using a neural network framework.

AI · Deep Learning · LLM · Language Models
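The counting half of that idea fits in a few lines (the toy corpus and helper names are illustrative, not the post's code): tally adjacent character pairs, then predict the next character as the argmax of the counts.

```python
from collections import Counter, defaultdict

# Count-based bigram character model (illustrative sketch).

def train_bigram(text):
    """Tally how often each character follows each other character."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def predict_next(counts, ch):
    """Most likely next character after `ch` (argmax of the counts)."""
    return counts[ch].most_common(1)[0][0]

counts = train_bigram("hello hello help")
print(predict_next(counts, "h"))  # 'e'
```

Normalizing each row of counts into probabilities gives the bigram distribution; the neural-network version of the post learns the same table as the weights of a single linear layer.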

Building a Minimal Autograd Framework from Scratch

Feb 16, 2024, 10:28 AM · 25 min read

Learning from Andrej Karpathy's micrograd project, we build an automatic differentiation framework from scratch to deeply understand the core principles of backpropagation and the chain rule.

Deep Learning · AI · PyTorch · Autograd · Neural Networks
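The essence of a micrograd-style framework is a scalar value that records its inputs and a local backward rule per operation. A minimal sketch supporting only + and * (illustrative, not the post's code):

```python
# Micrograd-style scalar autograd with + and * only (illustrative sketch).

class Value:
    def __init__(self, data, children=()):
        self.data = data
        self.grad = 0.0
        self._children = children
        self._backward = lambda: None

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def _backward():
            self.grad += out.grad       # d(a+b)/da = 1
            other.grad += out.grad      # d(a+b)/db = 1
        out._backward = _backward
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def _backward():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out._backward = _backward
        return out

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for c in v._children:
                    build(c)
                order.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

a, b = Value(2.0), Value(3.0)
loss = a * b + a          # d(loss)/da = b + 1 = 4, d(loss)/db = a = 2
loss.backward()
print(a.grad, b.grad)     # 4.0 2.0
```

Accumulating with `+=` rather than assignment is what makes gradients correct when a value (like `a` above) is used more than once in the graph.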

Turning 21

Dec 1, 2023, 04:00 PM · 5 min read

21st birthday summary, reviewing the growth and gains of the past year.

Happy Birthday