Part 1: Advanced Data Ingestion and Engineering Patterns
Building Dynamic Pipelines
Microsoft Fabric pipelines are used to ingest data into Fabric. Learn how to build dynamic, reusable pipelines
by combining expressions, variables, and parameters.
- Working with Expressions
- Reusing activity output
- Variables and Parameters
- Using Looping and Conditional Logic in pipelines
- Debugging a pipeline
- LAB: Authoring and debugging advanced Pipelines
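Fabric pipeline expressions follow the same syntax as Azure Data Factory. A few common forms (the activity,
parameter, and variable names below are illustrative placeholders):

```
@pipeline().parameters.SourceFolder       -- value of a pipeline parameter
@variables('LastLoadDate')                -- current value of a variable
@activity('Copy data').output.rowsCopied  -- output of a previous activity
@item()                                   -- current item inside a ForEach loop
```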
Incremental data ingestion with pipelines
Microsoft Fabric Pipelines support efficient incremental data ingestion. Learn how to detect data changes,
process only new or modified data, and build scalable ingestion patterns.
- Incremental ingestion concepts and use cases
- Using watermarks and high-water-mark patterns
- Change detection strategies (timestamps, keys, CDC)
- Implementing incremental logic in Fabric pipelines
- Handling late-arriving and updated data
- LAB: Building an incremental ingestion pipeline
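The high-water-mark pattern above can be sketched in plain Python. The data and column positions are invented
for illustration; in a real pipeline the watermark would live in a control table and the filter would run as a
query against the source system:

```python
from datetime import datetime

# Simulated source rows: (id, value, modified_at) -- illustrative data only.
SOURCE = [
    (1, "a", datetime(2024, 1, 1)),
    (2, "b", datetime(2024, 1, 5)),
    (3, "c", datetime(2024, 1, 9)),
]

def load_incremental(watermark: datetime) -> tuple[list, datetime]:
    """Return only rows modified after the watermark, plus the new high-water mark."""
    changed = [row for row in SOURCE if row[2] > watermark]
    new_mark = max((row[2] for row in changed), default=watermark)
    return changed, new_mark

# First run: everything modified after Jan 2 counts as new.
rows, mark = load_incremental(datetime(2024, 1, 2))
# Second run with the persisted watermark picks up nothing new.
rows2, _ = load_incremental(mark)
```

Persisting `mark` between runs is what makes the pattern incremental: each run only pays for the data that
changed since the previous run.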
Working with Delta Tables
Delta Lake is an optimized storage layer that provides the foundation for storing data and tables in a Fabric
lakehouse.
Learn how to create, query, and optimize Delta Tables in Microsoft Fabric.
- What is Delta Lake?
- Working with Delta Tables
- Managing Schema changes
- Versioning and Optimizing Delta Tables
- LAB: Working with Delta Tables
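Delta's version history can be illustrated with a toy append-only log in plain Python. This only mimics the
idea of time travel; real Delta Tables store versioned Parquet files plus a transaction log:

```python
class ToyVersionedTable:
    """Minimal stand-in for a versioned table: each commit snapshots the rows."""

    def __init__(self):
        self._versions = [[]]  # version 0 is the empty table

    def commit(self, rows):
        self._versions.append(list(rows))

    def read(self, version=None):
        """Read the latest version, or 'time travel' to an earlier one."""
        return self._versions[-1 if version is None else version]

t = ToyVersionedTable()
t.commit([("alice", 1)])
t.commit([("alice", 1), ("bob", 2)])
latest = t.read()      # current state: two rows
as_of_v1 = t.read(1)   # state before the second commit: one row
```

In Delta Lake the same idea is exposed declaratively, e.g. via `DESCRIBE HISTORY` and `VERSION AS OF` queries.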
Materialized Lake Views
Materialized Lake Views store precomputed query results in OneLake to improve performance and reuse data across
Fabric.
- Materialized Lake Views architecture and storage model
- Authoring Materialized Lake Views with SQL
- Refresh behavior and incremental processing
- Performance and optimization
- Consuming Materialized Lake Views across Fabric workloads
- Security, governance, and lineage considerations
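The core idea of a materialized view (precompute once, reuse cheaply until the next refresh) can be sketched
in plain Python; the sample table and the aggregation are invented for illustration:

```python
ORDERS = [("nl", 10), ("nl", 5), ("be", 7)]  # (country, amount) -- sample data

class ToyMaterializedView:
    """Caches an aggregate result; reads are cheap until a refresh is needed."""

    def __init__(self, source):
        self.source = source
        self._result = None

    def refresh(self):
        totals = {}
        for country, amount in self.source:
            totals[country] = totals.get(country, 0) + amount
        self._result = totals

    def read(self):
        if self._result is None:  # lazy first refresh
            self.refresh()
        return self._result

mv = ToyMaterializedView(ORDERS)
totals = mv.read()
```

A real Materialized Lake View is authored in SQL and stores its precomputed result in OneLake, but the
trade-off is the same: faster, repeatable reads in exchange for managing refresh behavior.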
Prepare Streaming Data with Fabric Eventstreams
Fabric Eventstreams provide a native, low-code way to ingest, process, and route real-time event data into
Eventhouses, OneLake, and other Fabric destinations, enabling streaming analytics scenarios alongside batch
workloads.
- Eventstreams architecture and core concepts
- Supported event sources and destinations
- Ingesting events from external systems and Azure services
- Basic event transformations and filtering
- Delivering streaming data to Lakehouse, Warehouse, and KQL databases
- Monitoring event flow, throughput, and errors
- LAB: Building an end-to-end streaming pipeline with Eventstreams
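A filter-and-route transformation like the ones Eventstreams applies can be sketched in plain Python; the
event shape, sensor names, and threshold are invented:

```python
import json

RAW_EVENTS = [
    '{"sensor": "s1", "temp": 21.5}',
    '{"sensor": "s2", "temp": 88.0}',
    '{"sensor": "s1", "temp": 90.2}',
]

def route(events, threshold=85.0):
    """Parse incoming events, filter on a condition, and route to two destinations."""
    normal, alerts = [], []
    for raw in events:
        event = json.loads(raw)
        (alerts if event["temp"] > threshold else normal).append(event)
    return normal, alerts

normal, alerts = route(RAW_EVENTS)
```

In an Eventstream the same logic is configured declaratively, with the two output branches delivering to
different destinations such as a Lakehouse table and a KQL database.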
Implementing Microsoft Fabric Mirroring
Fabric Mirroring enables near real-time replication of data from operational systems into OneLake, allowing
analytics workloads to run directly on continuously updated source data without complex ingestion pipelines.
- What Fabric Mirroring is and when to use it
- Supported source systems and prerequisites
- Setting up a mirrored database in Fabric
- Understanding change data capture (CDC) and latency
- Accessing mirrored data through Lakehouse and Warehouse endpoints
- Security, schema evolution, and operational considerations
- Using mirrored data for analytics and Power BI reporting
- LAB: Creating and querying a mirrored database
Part 2: Intelligent Analytics and Data Activation
Applying Fabric IQ with Ontologies
Fabric IQ brings AI-powered intelligence into Microsoft Fabric by grounding generative AI experiences in your
data.
Ontologies provide the semantic layer that helps Fabric IQ understand business concepts, relationships, and
context,
enabling more accurate insights, queries, and Copilot experiences.
- Overview of Fabric IQ and AI-enabled experiences in Fabric
- Ontologies in Fabric IQ
- Defining business entities, relationships, and metadata
- Connecting ontologies to Lakehouse and Warehouse data
- Using ontologies to improve Copilot queries and insights
- Governance, security, and lifecycle management of ontologies
- Best practices for modeling semantic knowledge in Fabric
- LAB: Creating an ontology and using Fabric IQ to explore data
Working with Fabric Data Agents
With a Fabric data agent, your team can ask plain-English questions about the data your organization stores in
Fabric OneLake and receive relevant answers. This way, even people without technical expertise in AI or a deep
understanding of the data structure can get precise, context-rich answers.
- The Purpose of the Fabric Data Agents
- Creating and Publishing a Fabric Data Agent
- Interacting with a Fabric Data Agent
- Understanding Permission Delegation
- Fine-tuning the Fabric Data Agent with Instructions and Examples
Data Activator
Data Activator in Microsoft Fabric takes action based on what's happening in your data.
Learn how to set up conditions against your data and trigger actions, such as running a Power Automate flow,
when those conditions are met.
- Creating and using Reflexes
- Defining Triggers, Conditions and Actions
- Getting data from Reports or Eventstreams
- LAB: Use Data Activator in Fabric
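The trigger/condition/action pattern can be sketched in plain Python: watch a stream of values and fire an
action each time the condition becomes true. The stock scenario and the alerting callback (standing in for,
say, a Power Automate flow) are invented:

```python
def watch(values, condition, action):
    """Fire `action` each time `condition` flips from False to True (rising edge)."""
    fired = []
    previously_met = False
    for v in values:
        met = condition(v)
        if met and not previously_met:  # trigger only once per dip, not per value
            fired.append(action(v))
        previously_met = met
    return fired

# Alert when stock drops below 10, but only once per dip below the threshold.
alerts = watch(
    [25, 12, 8, 6, 15, 9],
    condition=lambda stock: stock < 10,
    action=lambda stock: f"reorder triggered at stock={stock}",
)
```

The rising-edge check mirrors how an alert rule should behave: it fires when the condition starts being met,
rather than on every incoming event while it stays met.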
Fabric User Data Functions and Translytical Task Flow
Fabric User Data Functions enable you to encapsulate reusable business logic directly within Microsoft Fabric,
supporting translytical task flows that seamlessly combine analytical insights with operational actions across
notebooks, pipelines, and Power BI.
- Overview of User Data Functions and translytical task flow concepts
- Creating User Data Functions in Microsoft Fabric
- Implementing functions using notebooks and Spark
- Invoking User Data Functions from notebooks and pipelines
- Using User Data Functions in Power BI to drive translytical actions
- Parameter handling, performance, and scalability considerations
- Security, versioning, and governance of shared business logic
- LAB: Building and executing a translytical task flow with User Data Functions
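At its core, a user data function is a named function with typed parameters that notebooks, pipelines, and
Power BI can all invoke. A plain-Python sketch of such a shared business rule (the real Fabric User Data
Functions API wraps and hosts functions differently, so treat this as conceptual; the discount rule and tier
names are invented):

```python
def apply_discount(amount: float, customer_tier: str) -> float:
    """Reusable business rule: tier-based discount, shared across all callers."""
    rates = {"gold": 0.15, "silver": 0.10}  # illustrative discount rates
    return round(amount * (1 - rates.get(customer_tier, 0.0)), 2)

# The same logic could then be invoked from a notebook, a pipeline activity,
# or a Power BI button to drive an operational ("translytical") action.
discounted = apply_discount(100.0, "gold")
```

Centralizing the rule in one function is what makes the flow "translytical": the analytical layer and the
operational action both call the same governed piece of logic.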
This advanced course helps experienced Microsoft Fabric users move from basic implementations to
production-ready, scalable analytics solutions.
Participants learn to apply advanced engineering and architectural patterns that improve reliability,
performance, and reuse, while enabling intelligent and action-driven analytics.
The focus is on making informed design choices and effectively combining ingestion, storage, analytics, and
automation to support real-world data workloads.
This course is intended for data engineers and analytics professionals who already have hands-on experience with
Microsoft Fabric or
have completed the Data Engineering with Microsoft Fabric course.
It is aimed at professionals who want to deepen their expertise and take responsibility for designing and
operating robust, scalable and intelligent data platforms in Microsoft Fabric.