A course like this covers many different topics, so we start the training by briefly introducing each topic and showing how they relate to each other.
Sometimes people wonder whether their SQL Server needs more CPU power. In this module we see how SQL Server schedules queries onto workers so they can run on a thread. A very important concept is wait statistics, where we essentially learn to ask SQL Server what it is waiting on.
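As a small taste of the module, wait statistics can be queried directly from the `sys.dm_os_wait_stats` dynamic management view. A sketch (the filtered-out wait types below are just a few illustrative benign waits, not a complete list):

```sql
-- Top 10 wait types by total wait time since the last restart,
-- ignoring a few common benign waits.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms  -- time spent waiting for CPU after the resource became available
FROM   sys.dm_os_wait_stats
WHERE  wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP', N'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC;
```

A high share of `signal_wait_time_ms` relative to `wait_time_ms` is one of the signs discussed when deciding whether more CPU power would actually help.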
SQL Server stores its data on disk. In this module we discuss how data for regular data structures is stored, how the data can be spread over multiple disks, and the common performance pitfalls people encounter when they set up a SQL Server database.
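Spreading data over multiple disks is done with files and filegroups. A minimal sketch, assuming a hypothetical `Sales` database and drive letters:

```sql
-- Hypothetical example: a filegroup with one file on each of two disks,
-- so SQL Server stripes allocations across both.
ALTER DATABASE Sales ADD FILEGROUP SalesData;
ALTER DATABASE Sales ADD FILE
    (NAME = SalesData1, FILENAME = 'E:\Data\SalesData1.ndf', SIZE = 10GB),
    (NAME = SalesData2, FILENAME = 'F:\Data\SalesData2.ndf', SIZE = 10GB)
TO FILEGROUP SalesData;
```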
SQL Server cannot run queries on data while it sits on disk; the data must first be loaded into main memory. But how does SQL Server decide how long to cache data in memory, how can we inspect which data is cached right now, and what else besides data is kept in memory? These are the questions we answer in this module.
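Inspecting what is cached right now is one of the hands-on exercises; the `sys.dm_os_buffer_descriptors` DMV exposes one row per cached page. A sketch:

```sql
-- How much of the buffer pool each database is currently using.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*)             AS cached_pages,
       COUNT(*) * 8 / 1024  AS cached_mb   -- data pages are 8 KB each
FROM   sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_pages DESC;
```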
We also have to take care when designing the tables within a database. In this module we discuss the impact that data types have on the size of a row, since bigger rows often result in slower queries. Another thing to worry about is implicit data type conversions, which can cause SQL Server a lot of extra work, or can even prevent SQL Server from using some indexes.
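A minimal sketch of such an implicit conversion, using a hypothetical `Customers` table:

```sql
-- Hypothetical table: CustomerCode is varchar and has a supporting index.
CREATE TABLE dbo.Customers
(
    CustomerId   int IDENTITY PRIMARY KEY,
    CustomerCode varchar(20) NOT NULL
);
CREATE INDEX ix_Customers_CustomerCode ON dbo.Customers (CustomerCode);

-- The N prefix makes the literal nvarchar; because nvarchar outranks varchar
-- in data type precedence, SQL Server implicitly converts the column,
-- which can turn an index seek into a scan:
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = N'C-1001';

-- Matching the column's type keeps the predicate seekable:
SELECT CustomerId FROM dbo.Customers WHERE CustomerCode = 'C-1001';
```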
Indexes are the most important tool for improving SQL Server performance. We first discuss, for each of the three basic storage options (heaps, clustered indexes and non-clustered indexes), how the data is stored and how this influences SELECT, INSERT, UPDATE and DELETE statements. Then we switch over to how the SQL Server Query Optimizer uses statistics to decide which index to use when executing a query.
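The statistics the optimizer relies on can be inspected directly. A sketch, with hypothetical table and index names:

```sql
-- Classic way to look at an index's statistics (header, density, histogram):
DBCC SHOW_STATISTICS ('dbo.Orders', 'ix_Orders_OrderDate');

-- Newer DMF form: the histogram as a relational result set.
-- The second argument is the stats_id; 2 is just a hypothetical value here.
SELECT * FROM sys.dm_db_stats_histogram(OBJECT_ID('dbo.Orders'), 2);
```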
Having an index is one thing, using the index is another story: how can we see which indexes SQL Server uses, and how it uses them? Execution plans are the answer to that question. In this part of the training we discuss how to obtain execution plans and how to analyze them, both with the traditional techniques that have been in SQL Server for many years and with the Query Store, which was introduced in SQL Server 2016 and is also available in Azure SQL Database.
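Both approaches can be sketched briefly. The first snippet uses the long-standing `SET STATISTICS XML` option; the second queries the Query Store catalog views for the most CPU-hungry queries:

```sql
-- Classic technique: return the actual execution plan as XML.
SET STATISTICS XML ON;
SELECT name FROM sys.objects WHERE type = 'U';
SET STATISTICS XML OFF;

-- Query Store (SQL Server 2016+ / Azure SQL Database):
-- which queries accumulated the most CPU time?
SELECT TOP (10)
       q.query_id,
       qt.query_sql_text,
       SUM(rs.avg_cpu_time * rs.count_executions) AS total_cpu_time
FROM   sys.query_store_query            AS q
JOIN   sys.query_store_query_text       AS qt ON qt.query_text_id = q.query_text_id
JOIN   sys.query_store_plan             AS p  ON p.query_id       = q.query_id
JOIN   sys.query_store_runtime_stats    AS rs ON rs.plan_id       = p.plan_id
GROUP BY q.query_id, qt.query_sql_text
ORDER BY total_cpu_time DESC;
```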
This module combines the skills we gained in the two previous modules. We see how changing queries, indexes and constraints influences the execution plan and the performance of a query.
The SQL Server cardinality estimator uses statistics to estimate the number of rows returned by operations such as joins and filters. The query optimizer then uses these estimates to build execution plans. In SQL Server 2014 and 2016, Microsoft changed how these estimates are computed. In this module we dive into these changes, discuss the overall benefit of the new estimates, and also discuss how you can keep using the old ones if they did a better job for certain queries.
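Falling back to the old estimator can be done per query or per database. A sketch, with a hypothetical `Orders` table (the hint requires SQL Server 2016 SP1 or later):

```sql
-- Per query: force the pre-2014 ("legacy") cardinality estimator.
SELECT COUNT(*)
FROM   dbo.Orders
WHERE  OrderDate >= '2024-01-01'
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));

-- Per database (SQL Server 2016+):
ALTER DATABASE SCOPED CONFIGURATION SET LEGACY_CARDINALITY_ESTIMATION = ON;
```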
A database must store data in a consistent way. But if everybody can change all the data in parallel, we lose transactional consistency. This module discusses the options SQL Server provides for allowing parallel sessions to access the same data while keeping that data transactionally consistent.
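One of those options is snapshot isolation, sketched here with a hypothetical `Orders` table:

```sql
-- Snapshot isolation must first be enabled at the database level:
ALTER DATABASE CURRENT SET ALLOW_SNAPSHOT_ISOLATION ON;

-- A session can then read a transactionally consistent snapshot
-- without blocking, or being blocked by, concurrent writers:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.Orders;
COMMIT;
```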
To apply performance optimizations in practice we must first monitor the SQL Server to identify the types of performance problems we have. Ideally, though, we start monitoring the SQL Server before problems arise. This way we establish a baseline against which we can compare the monitored values when things start to go wrong. In this module we discuss the different types of monitoring tools in SQL Server.
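Extended Events is one of those tools. A minimal sketch of a session that captures statements running longer than one second (session and file names are hypothetical):

```sql
-- Capture statements with a duration over 1 second (duration is in microseconds).
CREATE EVENT SESSION [LongRunningQueries] ON SERVER
ADD EVENT sqlserver.sql_statement_completed
    (WHERE duration > 1000000)
ADD TARGET package0.event_file (SET filename = N'LongRunningQueries');

ALTER EVENT SESSION [LongRunningQueries] ON SERVER STATE = START;
```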
The main use of columnstore indexes is to improve query performance for data warehouse and data mart workloads. This chapter describes how columnstore indexes store data in a columnar format instead of the row-based storage used by 'classic' tables and indexes in SQL Server. Then you will learn how to create columnstore indexes, and strategies for using them on-premises and in Azure SQL Database.
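Creating a columnstore index is a single statement. A sketch, with a hypothetical fact table:

```sql
-- Convert a fact table to clustered columnstore storage:
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales
    ON dbo.FactSales;

-- Or keep the rowstore table and add a nonclustered columnstore index
-- covering the columns the analytical queries scan:
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_FactSales
    ON dbo.FactSales (SaleDate, ProductId, Quantity, Amount);
```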
In-Memory OLTP can significantly improve the performance of transaction processing, data ingestion and load, and transient data scenarios in on-premises SQL Server and Azure SQL Database. In-Memory OLTP improves the performance of transaction-processing tables by removing lock and latch contention between concurrently executing transactions.
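A sketch of a memory-optimized table (names and bucket count are hypothetical; the database also needs a memory-optimized filegroup, not shown here):

```sql
-- Rows live in memory and are accessed without locks or latches.
CREATE TABLE dbo.SessionState
(
    SessionId   uniqueidentifier NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload     varbinary(max)   NULL,
    LastUpdated datetime2        NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```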
This course is designed to give you the right amount of internals knowledge and a wealth of practical tuning and optimization techniques that you can put into production. This 5-day class offers comprehensive coverage of SQL Server architecture, indexing and statistics strategies, optimizing transaction log operations, tempdb and data file configuration, transactions and isolation levels, and locking and blocking. The course also teaches how to create baselines and benchmark SQL Server performance, how to analyze a workload and figure out where the performance problems are, and how to fix them. The course has a special focus on SQL Server I/O, CPU usage, memory usage, query plans, statement execution, parameter sniffing and procedural code, deadlocking, the plan cache, wait and latch statistics, Extended Events, DMVs and PerfMon.
This course targets both SQL Server 2022 (or earlier) on-premises and cloud-based solutions (Azure SQL Database or Azure SQL Managed Instance).
The primary audience for this course is individuals who develop, administer and maintain on-premises and Azure SQL Server databases and are responsible for the optimal performance of the SQL Server instances they develop or manage. These individuals also write queries against the data and need to ensure optimal execution performance of their workloads.