PyTorch Introduction
Get familiar with PyTorch, its ecosystem, and its typical workflow.
- What is PyTorch?
- Core Components: Tensors, Autograd, Modules
- PyTorch Workflow
- Ecosystem Overview
- Setting up the Development Environment
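Once the environment is set up, a quick sanity check like the following confirms the install and shows which hardware PyTorch can see (a minimal sketch; the MPS check applies to recent builds on Apple Silicon):

```python
import torch

print(torch.__version__)                  # installed PyTorch version
print(torch.cuda.is_available())          # True if a CUDA GPU is usable
print(torch.backends.mps.is_available())  # True on Apple Silicon (recent builds)
```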
Working with PyTorch Tensors
Learn how to create, manipulate, and optimize tensors.
- Scalars, Arrays, and Matrices
- Datatypes and Attributes
- Manipulating Tensors
- Tensor Operations (matmul, etc.)
- Reshaping Tensors (reshape, squeeze, permute)
- Creating Views and Stacking
- Using the GPU
- Common Issues with Tensors
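A minimal sketch of the kinds of tensor operations this module covers (shapes and values are illustrative):

```python
import torch

# Create a tensor and inspect its attributes.
x = torch.rand(2, 3)                  # float32 by default
print(x.shape, x.dtype, x.device)

# Reshape and matrix-multiply.
y = x.reshape(3, 2)                   # same data, new shape (a view when possible)
z = x @ y                             # matmul -> shape (2, 2)

# Move computation to the GPU when one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
z = z.to(device)
```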
Training Our First Model
In this module, you will build and train a simple linear regression model.
This will introduce you to the fundamental concepts of model training in PyTorch.
- Single Neuron Linear Regression
- Weights and Biases
- Loss Function
- Backpropagation
- Optimizers
- Learning Rate
- Epochs
- Building a Training Loop
- Visualizing the Model
- Making Predictions
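The full training loop we build in this module boils down to a pattern like this (a minimal sketch on toy data; the target function y = 2x + 1 is an assumption for illustration):

```python
import torch
from torch import nn

# Toy data: y = 2x + 1 plus a little noise.
X = torch.rand(100, 1)
y = 2 * X + 1 + 0.05 * torch.randn(100, 1)

model = nn.Linear(1, 1)                                   # single-neuron linear regression
loss_fn = nn.MSELoss()                                    # loss function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # learning rate

for epoch in range(100):                                  # epochs
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)                           # forward pass + loss
    loss.backward()                                       # backpropagation
    optimizer.step()                                      # update weights and biases

print(model.weight.item(), model.bias.item())             # should approach 2 and 1
```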
Testing Our First Model
Learn the common strategies for testing machine learning models.
- Splitting Data into Training, Validation, and Test Sets
- Evaluation Functions
- Metrics: Accuracy, F1, Confusion Matrix
- Building a Test Loop
- Visualizing the Training Process
- Overfitting and Underfitting
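A test loop mirrors the training loop without gradient updates. A minimal sketch (the untrained nn.Linear stands in for a model trained in the previous module):

```python
import torch
from torch import nn

X = torch.rand(100, 1)
y = 2 * X + 1 + 0.05 * torch.randn(100, 1)
model = nn.Linear(1, 1)       # stand-in for a trained model
loss_fn = nn.MSELoss()

# Simple 80/20 train/test split (torch.utils.data.random_split works on Dataset objects too).
n_train = int(0.8 * len(X))
X_train, y_train = X[:n_train], y[:n_train]   # used by the training loop
X_test, y_test = X[n_train:], y[n_train:]     # held out for evaluation

# Minimal test loop: eval mode, no gradient tracking.
model.eval()
with torch.inference_mode():
    test_loss = loss_fn(model(X_test), y_test)
print(f"test loss: {test_loss.item():.4f}")
```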
Building a Classification Model
Create and train a classification model using PyTorch.
- Understanding Classification Problems
- Binary vs Multi-Label Classification
- Analyzing Inputs and Outputs
- Turning Data into Tensors
- Setting up Sequential Layers
- Loss Functions for Classification
- Breaking Linearity
- Activation Functions: ReLU, Sigmoid
- From Probability to Classification
- Saving and Loading Models
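A minimal sketch of the binary case, from sequential layers through a probability to a class label (layer sizes and the file name are illustrative):

```python
import torch
from torch import nn

# Sequential layers with a ReLU to break linearity.
model = nn.Sequential(
    nn.Linear(2, 8),
    nn.ReLU(),
    nn.Linear(8, 1),                              # one logit per sample
)
loss_fn = nn.BCEWithLogitsLoss()                  # sigmoid + binary cross-entropy in one

x = torch.rand(4, 2)
targets = torch.randint(0, 2, (4, 1)).float()     # dummy 0/1 labels
logits = model(x)
loss = loss_fn(logits, targets)

probs = torch.sigmoid(logits)                     # probabilities in (0, 1)
preds = (probs > 0.5).long()                      # from probability to classification

# Saving and loading the learned parameters.
torch.save(model.state_dict(), "classifier.pt")
model.load_state_dict(torch.load("classifier.pt"))
```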
Data Preprocessing
Before we can start working on more realistic problems, we need to
understand how to prepare our data for effective machine learning.
- Data Normalization
- Categorical Encoding
- Handling Missing Data
- Data Augmentation
- Preprocessing Pipelines
- Batching and Shuffling
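Two of these steps in miniature, normalization and categorical encoding (the feature values are made up):

```python
import torch

# Standardize numeric features: zero mean, unit variance per column.
features = torch.tensor([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
normalized = (features - features.mean(dim=0)) / features.std(dim=0)

# One-hot encode a categorical column with three known categories.
categories = torch.tensor([0, 2, 1])
one_hot = torch.nn.functional.one_hot(categories, num_classes=3)
```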
Datasets and DataLoaders
Learn how to manage and load data efficiently.
- What is a Dataset?
- What is a DataLoader?
- Built-in Datasets in PyTorch
- Creating a Custom Dataset
- Using DataLoader for Batching
- Shuffling and Parallel Loading
- Integration with Transforms
- Practical Example with Multi-Label Classification
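A minimal custom Dataset and the DataLoader that batches it (the random multi-label data is a placeholder for a real dataset):

```python
import torch
from torch.utils.data import Dataset, DataLoader

class ToyMultiLabelDataset(Dataset):
    """A custom Dataset needs __len__ and __getitem__."""

    def __init__(self, n=100):
        self.X = torch.rand(n, 4)                     # 4 input features
        self.y = torch.randint(0, 2, (n, 3)).float()  # 3 independent labels

    def __len__(self):
        return len(self.X)

    def __getitem__(self, idx):
        return self.X[idx], self.y[idx]

# DataLoader handles batching and shuffling; num_workers enables parallel loading.
loader = DataLoader(ToyMultiLabelDataset(), batch_size=16, shuffle=True)
for batch_X, batch_y in loader:
    pass  # a training step would go here
```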
Lightning
Lightning helps you write cleaner, more scalable, and production-ready code by
removing boilerplate from training loops.
It automates repetitive tasks like checkpointing, logging, and distributed training,
so you can focus on the model logic instead of infrastructure details.
- Benefits of Using Lightning
- Migrating Our PyTorch Model
- LightningModule and LightningDataModule
- Trainer
- Callbacks
- Checkpointing
- Managing Configuration & Hyperparameters
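Roughly, migrating the earlier regression model means packing it into a LightningModule and letting the Trainer run the loop. A minimal sketch (assumes Lightning 2.x, where the package imports as `lightning`):

```python
import torch
from torch import nn
import lightning as L

class LitRegressor(L.LightningModule):
    def __init__(self, lr=0.1):
        super().__init__()
        self.save_hyperparameters()      # records lr for logging/checkpoints
        self.model = nn.Linear(1, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        loss = nn.functional.mse_loss(self.model(x), y)
        self.log("train_loss", loss)     # logging handled for you
        return loss

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=self.hparams.lr)

# The Trainer owns the loop, checkpointing, and device placement:
# trainer = L.Trainer(max_epochs=10)
# trainer.fit(LitRegressor(), train_dataloaders=some_dataloader)
```

(`some_dataloader` is a placeholder for a DataLoader like the one built in the previous module.)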
Experiment Tracking and Visualization with W&B
Track experiments and visualize results using Weights & Biases.
- Why We Need Experiment Tracking
- Using W&B with PyTorch and Lightning
- Viewing Metrics, System Info, and Artifacts
- Optimizing Hyperparameters
- Deployment Options
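The core W&B pattern is an init/log/finish cycle. A minimal sketch (assumes `pip install wandb` and a logged-in account; the project name and metric are made up):

```python
import wandb

wandb.init(project="pytorch-course", config={"lr": 0.1, "epochs": 10})
for epoch in range(10):
    loss = 1.0 / (epoch + 1)                   # placeholder metric
    wandb.log({"epoch": epoch, "loss": loss})
wandb.finish()
```

With Lightning, the same integration is a logger passed to the Trainer, e.g. `WandbLogger` from `lightning.pytorch.loggers`.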
Optimizing Our Model
Improve model performance through various strategies.
- Building a Base Model
- Improving Data
- Cross-Validation
- Improving Model Architecture
- Changing Training Strategy
- Hyperparameter Search
- Regularization
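For example, two common regularization levers look like this in code (sizes and rates are illustrative):

```python
import torch
from torch import nn

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),     # randomly zeroes activations during training
    nn.Linear(64, 1),
)

# L2 regularization via the optimizer's weight decay.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```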
Exploring Convolutional Neural Networks
Learn CNN fundamentals and apply them to vision tasks.
- Understanding Computer Vision Problems
- Analyzing Inputs and Outputs
- Basics of CNNs
- Convolutional and MaxPool Layers
- Using Hardware Acceleration
- Evaluating and Improving the Model
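A tiny CNN of the kind this module builds, sized for 28x28 grayscale images (the exact architecture is illustrative):

```python
import torch
from torch import nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # 1 -> 16 feature maps
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 28x28 -> 14x14
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                              # 14x14 -> 7x7
    nn.Flatten(),
    nn.Linear(32 * 7 * 7, 10),                    # logits for 10 classes
)

# Hardware acceleration: move model and data to the same device.
device = "cuda" if torch.cuda.is_available() else "cpu"
logits = cnn.to(device)(torch.rand(8, 1, 28, 28, device=device))
```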
Feature Extraction
Instead of building a complete model from scratch, you can take a pre-trained model and adapt it to your use case.
- Exploring Pre-trained Models
- Transfer Learning
- Freezing Parameters
- Adding a New Classification Layer
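The core recipe in a few lines, using a torchvision ResNet as the pre-trained backbone (the backbone choice and the 5 target classes are assumptions; weights download on first use):

```python
import torch
from torch import nn
from torchvision import models

# Load a pre-trained backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all existing parameters...
for param in model.parameters():
    param.requires_grad = False

# ...then replace the final layer with a new, trainable classification head.
model.fc = nn.Linear(model.fc.in_features, 5)   # e.g. 5 target classes
```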
Transformer Architecture in a Nutshell
Before we can do anything with an LLM, we need to understand its building blocks.
- Tokenization and Embedding
- Positional Information
- Self-Attention Mechanism
- Overall Structure
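The self-attention mechanism fits in a few lines of tensor code. A minimal single-head sketch without masking or batching (dimensions are illustrative):

```python
import math
import torch

def self_attention(x, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence of embeddings."""
    Q, K, V = x @ W_q, x @ W_k, x @ W_v
    scores = Q @ K.transpose(-2, -1) / math.sqrt(K.shape[-1])
    weights = torch.softmax(scores, dim=-1)   # how much each token attends to the others
    return weights @ V

d = 16                                        # embedding dimension
x = torch.rand(10, d)                         # 10 tokens, already embedded
W_q, W_k, W_v = (torch.rand(d, d) for _ in range(3))
out = self_attention(x, W_q, W_k, W_v)        # shape: (10, 16)
```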
Fine-Tuning an LLM with LoRA
Face it: you are not going to train an LLM from scratch. You can, however, fine-tune a pre-trained model for your specific use case.
Learn parameter-efficient fine-tuning for large language models.
- Why We Need Parameter-Efficient Fine-Tuning
- The Core Idea of Low-Rank Adaptation
- Practical Benefits of Using LoRA
- Fine-Tuning a Small LLM Using LoRA
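With Hugging Face's peft library, the setup is only a few lines. A hedged sketch (the base model and the target module names are assumptions; they depend on the architecture you fine-tune):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # any small causal LM

config = LoraConfig(
    r=8,                          # rank of the low-rank update matrices
    lora_alpha=16,                # scaling factor for the update
    target_modules=["c_attn"],    # attention projections (GPT-2 naming)
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()   # only a tiny fraction of the base model
```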
By the end of this course, you will have a solid understanding of PyTorch and Lightning,
enabling you to build, train, and optimize neural networks with confidence.
You’ll learn how to work with tensors, create models from scratch,
and implement best practices for training and evaluation.
Beyond the fundamentals, this course equips you with the skills to scale your workflows using Lightning,
track experiments with Weights & Biases, and explore techniques such as convolutional neural networks, Transformers,
and parameter-efficient fine-tuning.
This training is designed to help you move from theory to practice, giving you hands-on experience
with tools and techniques that are widely used in industry and research.
This 5-day course is intended for data scientists, machine learning engineers, and AI researchers
who want hands-on experience in building and deploying neural networks using PyTorch and Lightning.
Participants should have a basic understanding of Python programming and some familiarity with machine learning concepts.