Audience: IT Admins, BI Developers, Platform Owners. This section covers CI/CD deployment patterns, Git integration, and step-by-step migration strategies from Synapse, ADF, Power BI Premium, and on-premises systems.
Deployment Patterns
CI/CD, Git integration, and lifecycle management for Fabric artifacts.
Development Lifecycle
A well-structured deployment strategy ensures reliability and consistency across environments. Microsoft Fabric supports a full Dev → Test → Prod lifecycle with built-in tools.
Fabric Deployment Pipelines
Deployment pipelines are Fabric's built-in tool for managing the lifecycle of your artifacts (lakehouses, warehouses, reports, notebooks). They let you promote artifacts through stages with comparison and selective deployment.
- Up to 10 stages per pipeline (typically Dev → Test → Prod)
- Compare artifact differences between stages before deploying
- Deploy specific items or all items at once
- Configure deployment rules to parameterize connections per environment
- Supports backward deployment (e.g., rolling back from Prod to Test)
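Deployments can also be triggered programmatically. The sketch below builds a request for the deployment pipelines `deployAll` endpoint of the Power BI REST API; the pipeline ID and bearer token are placeholders, and the option names shown are assumptions to verify against the API reference before use.

```python
import json
import urllib.request

def build_deploy_all_request(pipeline_id: str, source_stage_order: int,
                             token: str) -> urllib.request.Request:
    """Build a deployAll request promoting one stage to the next.

    Stage order is zero-based: 0 = Dev, 1 = Test (so 0 deploys Dev -> Test).
    """
    url = f"https://api.powerbi.com/v1.0/myorg/pipelines/{pipeline_id}/deployAll"
    body = {
        "sourceStageOrder": source_stage_order,
        # Assumed options: allow the deploy to create or overwrite items
        "options": {"allowCreateArtifact": True, "allowOverwriteArtifact": True},
    }
    headers = {"Authorization": f"Bearer {token}",
               "Content-Type": "application/json"}
    return urllib.request.Request(url, data=json.dumps(body).encode(),
                                  headers=headers, method="POST")

# To actually deploy, pass the built request to urllib.request.urlopen(...)
```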
Git Integration
Fabric supports direct integration with Azure DevOps and GitHub repositories. This enables version control, code review, and collaboration workflows for Fabric artifacts.
Use Git integration for development (version control, branching, PRs) and deployment pipelines for promotion (Dev → Test → Prod). They complement each other.
Git Integration Setup
- Connect workspace to a Git repo (Azure DevOps or GitHub)
- Each artifact is serialized into JSON/PBIR definitions in the repo
- Use feature branches for development, merge to main via PR
- Auto-sync: changes in Git can auto-update the workspace (and vice versa)
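Connecting a workspace to a repo can be scripted too. A hedged sketch against the Fabric REST API's `git/connect` endpoint; the organization, project, and repository names below are all placeholders, and the payload shape should be checked against the Git APIs reference.

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def build_git_connect_request(workspace_id: str, token: str) -> urllib.request.Request:
    """Build a request linking a workspace to an Azure DevOps repo."""
    body = {
        "gitProviderDetails": {
            "gitProviderType": "AzureDevOps",
            "organizationName": "my-org",          # placeholder
            "projectName": "my-project",           # placeholder
            "repositoryName": "fabric-artifacts",  # placeholder
            "branchName": "main",
            "directoryName": "/",
        }
    }
    return urllib.request.Request(
        f"{FABRIC_API}/workspaces/{workspace_id}/git/connect",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```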
CI/CD Patterns
| Pattern | Description | Recommended For |
|---|---|---|
| Pipeline-only | Use Fabric deployment pipelines exclusively | Small teams, simpler workflows |
| Git + Pipelines | Git for source control + pipelines for promotion | Most teams (recommended) |
| Full CI/CD | Azure DevOps / GitHub Actions + Fabric REST APIs | Enterprise teams with strict governance |
| Python CI/CD (fabric-cicd) | Code-first deployments with Python SDK + CI/CD pipelines | Teams wanting programmatic, deterministic deployments |
Python CI/CD with fabric-cicd
The fabric-cicd library is Microsoft's open-source Python SDK for automating Fabric workspace deployments. It provides deterministic, code-first deployments that integrate seamlessly with GitHub Actions and Azure DevOps pipelines.
Installation
```shell
pip install fabric-cicd
```
Key Capabilities
- Deterministic deployments: Full, predictable workspace deployments every time, with no manual workspace operations needed
- Source-control integration: Deploy directly from Git repository directories (GitHub, Azure DevOps) into target workspaces
- Environment parameterization: Use `parameter.yml` files to manage environment-specific values (connection strings, workspace IDs, lakehouse names)
- Flexible authentication: Supports all Azure `TokenCredential` methods, including user identity, service principal, and managed identity
- Supported item types: Notebooks, Data Pipelines, Semantic Models, Reports, Lakehouses, Environments, and more
Deployment Example
```python
from fabric_cicd import FabricWorkspace, publish_all_items
from azure.identity import ClientSecretCredential

# Authenticate with service principal
credential = ClientSecretCredential(
    tenant_id="your-tenant-id",
    client_id="your-client-id",
    client_secret="your-client-secret"
)

# Define the target workspace
workspace = FabricWorkspace(
    workspace_id="your-workspace-id",
    environment="PROD",  # Maps to parameter.yml values
    repository_directory="./fabric-items",  # Local repo directory with item definitions
    item_type_in_scope=["Notebook", "DataPipeline", "SemanticModel", "Report"],
    credential=credential
)

# Deploy all items
publish_all_items(workspace)
```
GitHub Actions Integration
```yaml
name: Deploy to Fabric

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Install fabric-cicd
        run: pip install fabric-cicd

      - name: Deploy to Production
        env:
          AZURE_TENANT_ID: ${{ secrets.AZURE_TENANT_ID }}
          AZURE_CLIENT_ID: ${{ secrets.AZURE_CLIENT_ID }}
          AZURE_CLIENT_SECRET: ${{ secrets.AZURE_CLIENT_SECRET }}
        run: python scripts/deploy.py --env PROD
```
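The workflow invokes `scripts/deploy.py`, which is not shown above. A hypothetical version, reusing the fabric-cicd calls from the earlier example and reading the service principal credentials from the environment, might look like this (the `FABRIC_WORKSPACE_ID` variable name is an assumption):

```python
import argparse
import os

def parse_args(argv=None):
    """Parse the --env flag the workflow passes (e.g. --env PROD)."""
    parser = argparse.ArgumentParser(description="Deploy Fabric items for one environment")
    parser.add_argument("--env", choices=["DEV", "TEST", "PROD"], required=True,
                        help="Environment key used to resolve parameter.yml values")
    parser.add_argument("--repo-dir", default="./fabric-items",
                        help="Repository directory holding the item definitions")
    return parser.parse_args(argv)

def main(argv=None):
    args = parse_args(argv)
    # Imported here so the script can be read without the SDKs installed
    from azure.identity import ClientSecretCredential
    from fabric_cicd import FabricWorkspace, publish_all_items

    credential = ClientSecretCredential(
        tenant_id=os.environ["AZURE_TENANT_ID"],
        client_id=os.environ["AZURE_CLIENT_ID"],
        client_secret=os.environ["AZURE_CLIENT_SECRET"],
    )
    workspace = FabricWorkspace(
        workspace_id=os.environ["FABRIC_WORKSPACE_ID"],  # assumed secret/variable name
        environment=args.env,
        repository_directory=args.repo_dir,
        item_type_in_scope=["Notebook", "DataPipeline", "SemanticModel", "Report"],
        credential=credential,
    )
    publish_all_items(workspace)

# Call main() when running as the workflow's deployment step.
```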
Use service principals (not user accounts) for CI/CD automation. Create a dedicated app registration in Entra ID, grant it workspace Contributor access, and store credentials as CI/CD secrets. Use parameter.yml to parameterize environment-specific values so the same codebase deploys to Dev, Test, and Prod.
Environment Parameterization
```yaml
find_replace:
  - find: "your-dev-lakehouse-id"
    replace_with:
      DEV: "dev-lakehouse-id"
      TEST: "test-lakehouse-id"
      PROD: "prod-lakehouse-id"
  - find: "your-dev-connection"
    replace_with:
      DEV: "dev-connection-string"
      TEST: "test-connection-string"
      PROD: "prod-connection-string"
```
Environment Separation
- Use separate workspaces per environment (ws-dev, ws-test, ws-prod)
- Use separate capacities for production vs. non-production workloads
- Apply deployment rules to swap connection strings per environment
- Restrict production workspace access to service principals or limited admins
Migration Strategies
Moving from existing platforms to Microsoft Fabric with minimal risk.
Migration Decision Framework
Migration is not one-size-fits-all. Use a phased approach to reduce risk and ensure continuity:
Migration Paths
From Azure Synapse Analytics
- Dedicated SQL Pools → Fabric Warehouse: Migrate T-SQL scripts and stored procedures. Most T-SQL is compatible; review unsupported features (e.g., some system views)
- Serverless SQL Pools → Fabric Lakehouse: Replace OPENROWSET queries with Lakehouse SQL endpoints
- Synapse Spark → Fabric Notebooks: PySpark/Scala code is largely compatible; update library references and session configs
- Use Shortcuts to connect Fabric to existing ADLS Gen2 storage during transition
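Shortcuts make the transition incremental: Fabric reads the existing ADLS Gen2 data in place instead of copying it. A hedged sketch of the payload for the OneLake shortcuts REST API (`POST /workspaces/{workspaceId}/items/{itemId}/shortcuts`); the path, storage URL, and connection ID below are placeholders, and the field names should be verified against the shortcuts API reference.

```python
def adls_shortcut_payload(name: str, storage_url: str, subpath: str,
                          connection_id: str) -> dict:
    """Build a shortcut definition pointing a lakehouse at ADLS Gen2 data."""
    return {
        "path": "Tables",  # where the shortcut appears inside the lakehouse
        "name": name,
        "target": {
            "adlsGen2": {
                "location": storage_url,    # e.g. https://<account>.dfs.core.windows.net
                "subpath": subpath,         # container/folder within the account
                "connectionId": connection_id,
            }
        },
    }
```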
From Azure Data Factory (ADF)
- Fabric Data Factory supports most ADF pipeline activities natively
- Use copy activity equivalents and Dataflows Gen2 for transformation
- Some connectors may differ โ validate your source/sink connectors
- Consider running ADF and Fabric in parallel during migration
From Power BI Premium
- Power BI workspaces can be reassigned to Fabric capacity (F64 or higher)
- Existing datasets, reports, and dashboards continue to work unchanged
- Migrate import-mode datasets to Direct Lake for better performance and cost savings
- P SKUs map to equivalent F SKUs (P1 → F64, P2 → F128, etc.)
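The P-to-F mapping is mechanical, since capacity doubles at each tier, and can be expressed as a simple lookup:

```python
# Power BI Premium P SKUs and their equivalent Fabric F SKUs
P_TO_F = {"P1": "F64", "P2": "F128", "P3": "F256", "P4": "F512", "P5": "F1024"}

def fabric_sku_for(p_sku: str) -> str:
    """Return the Fabric SKU equivalent to a given Premium P SKU."""
    return P_TO_F[p_sku]
```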
From On-Premises (SQL Server / SSIS)
- Use Data Factory pipelines with on-premises data gateway for initial data movement
- Migrate SSIS packages to Dataflows Gen2 or Fabric notebooks
- Use Fabric mirroring to replicate SQL Server databases to OneLake in near real-time
- Plan for a hybrid phase where on-prem and Fabric run in parallel
Official Migration Tools & Resources
The table below summarizes the official Microsoft tools, step-by-step guidance, and current migration support status for each source platform.

| Source Platform | What Migrates | Migration Support Status |
|---|---|---|
| Azure Synapse Analytics | Dedicated SQL pools, Spark pools, Synapse pipelines | Official Tools Available |
| Azure Data Factory | ADF pipelines, linked services, data flows | Migration Assistant Available |
| Power BI Premium (P-SKU) | P1–P5 capacity with reports, datasets, dashboards | Guided Migration Path |
| On-Premises SQL / SSIS | SQL Server databases, SSIS packages, SSRS reports | Partial Tooling |
| Databricks | Spark workspaces, Delta tables, MLflow experiments | Manual Migration |
| Snowflake | Snowflake warehouses, stages, stored procedures | Mirroring + Manual |

Common Migration Pitfalls
1. Doing a "big bang" migration: always migrate incrementally.
2. Ignoring data quality issues in the source: clean data before or during migration.
3. Not testing performance with production-scale data.
4. Forgetting to migrate security policies and access controls.
5. Underestimating change management: train your team!
Learn More
Plan First Deployment →
Capacity planning, SKU selection, cost optimization, and the TCO/ROI calculator have moved to the Capacity Planning page.