Audience: IT Admin · Business Leader · Data Architect. This section covers capacity SKU selection, cost optimization strategies, CU smoothing and throttling, capacity sizing for new workloads and migrations, and a TCO/ROI calculator.
Capacity & Cost Management
Understanding Fabric's capacity model and optimizing your spend.
Capacity Units (CUs)
Microsoft Fabric uses a universal compute unit called a Capacity Unit (CU). All workloads (Spark, SQL, Power BI, Data Factory) consume from the same CU pool. This simplifies capacity planning compared to provisioning separate services.
SKU Options
| SKU | Capacity Units | Use Case | Pay-As-You-Go / month* | 1-Year Reservation / month* | Savings |
|---|---|---|---|---|---|
| F2 | 2 CUs | POC / Learning | ~$262 | ~$156 | ~40% |
| F4 | 4 CUs | Small team dev | ~$525 | ~$313 | ~40% |
| F8 | 8 CUs | Small production | ~$1,050 | ~$625 | ~40% |
| F16 | 16 CUs | Medium workloads | ~$2,100 | ~$1,251 | ~40% |
| F32 | 32 CUs | Department-level | ~$4,200 | ~$2,501 | ~40% |
| F64 | 64 CUs | Large workloads | ~$8,400 | ~$5,003 | ~40% |
| F128 | 128 CUs | Enterprise | ~$16,800 | ~$10,005 | ~40% |
| F256+ | 256+ CUs | Large enterprise | ~$33,600+ | ~$20,011+ | ~40% |
The prices above cover compute capacity only (CU processing power for Spark, SQL, Power BI, Data Factory, etc.). OneLake storage is billed separately at standard Azure storage rates (~$0.023/GB/month for hot tier). Storage charges continue even when the capacity is paused, as long as data remains in OneLake. Budget for both compute and storage when planning your Fabric costs.
*Approximate pricing (USD, East US region). Check the official pricing page for current rates. 3-year reservations offer even deeper discounts. F64+ includes free Power BI viewer access (no Pro license needed).
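The arithmetic behind the table can be captured in a small estimator. The per-CU-hour and storage rates below are assumptions back-derived from the approximate prices above, not official figures; confirm against the Azure pricing page before budgeting.

```python
# Rough monthly Fabric cost: compute (per CU) plus OneLake storage.
# Rates are assumptions inferred from the table above, not official pricing.
HOURS_PER_MONTH = 730               # Azure's usual billing-month convention
PAYG_RATE_PER_CU_HOUR = 0.18        # inferred from F2 at ~$262/month
RESERVATION_DISCOUNT = 0.40         # approximate 1-year reservation saving
STORAGE_RATE_PER_GB_MONTH = 0.023   # OneLake hot tier, per the note above

def monthly_cost(cu: int, storage_gb: float, reserved: bool = False) -> float:
    """Estimated monthly USD cost for a capacity plus its OneLake storage."""
    compute = cu * PAYG_RATE_PER_CU_HOUR * HOURS_PER_MONTH
    if reserved:
        compute *= 1 - RESERVATION_DISCOUNT
    return compute + storage_gb * STORAGE_RATE_PER_GB_MONTH

# F8 with 500 GB in OneLake: pay-as-you-go vs 1-year reservation
print(round(monthly_cost(8, 500)), round(monthly_cost(8, 500, reserved=True)))
```

Note how storage (~$12 for 500 GB) is negligible next to compute at these scales; compute dominates the bill.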
Pricing Models
Pay-As-You-Go
Billed per second of compute used. Ideal for variable workloads. Pause capacity when not needed to stop billing.
Reserved Capacity
1-year or 3-year commitment with up to 40% discount. Best for predictable, always-on production workloads.
Fabric Trial
Free 60-day trial with F64 capacity. Great for evaluation and proof-of-concept before committing.
Capacity Cost Calculator
Estimate your Fabric capacity needs based on your expected workload mix to arrive at a starting SKU recommendation.
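A minimal sketch of the kind of estimate such a calculator makes. The per-workload CU weights below are illustrative assumptions only, not Microsoft figures; benchmark representative workloads (or use the official SKU Estimator) for real sizing.

```python
# Map a rough workload mix to the smallest Fabric SKU that covers it.
FABRIC_SKUS = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]  # F2 .. F1024

# Assumed average CU draw per unit of workload (hypothetical weights):
CU_PER_CONCURRENT_SPARK_JOB = 8
CU_PER_100_BI_USERS = 4
CU_PER_DAILY_PIPELINE_RUN = 0.5

def recommend_sku(spark_jobs: int, bi_users: int, pipeline_runs: int) -> str:
    """Return the smallest F-SKU whose CUs cover the estimated average draw."""
    estimated_cu = (spark_jobs * CU_PER_CONCURRENT_SPARK_JOB
                    + (bi_users / 100) * CU_PER_100_BI_USERS
                    + pipeline_runs * CU_PER_DAILY_PIPELINE_RUN)
    for cu in FABRIC_SKUS:
        if cu >= estimated_cu:
            return f"F{cu}"
    return "F1024+ (contact Microsoft)"

# 4 concurrent Spark jobs + 200 BI users + 20 daily pipeline runs -> ~50 CU
print(recommend_sku(spark_jobs=4, bi_users=200, pipeline_runs=20))  # F64
```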
Cost Optimization Strategies
- Pause/Resume: Pause non-production capacities after business hours and on weekends for savings of up to 70%
- Right-size capacity: Use the Capacity Metrics App to monitor utilization and resize accordingly
- Smoothing & Bursting: Fabric smooths CU usage over 24-hour windows, allowing short bursts without throttling
- Optimize Spark jobs: Use V-Order optimization, partition pruning, and right-size Spark sessions
- Use Direct Lake: Avoid import-mode datasets that consume memory; Direct Lake reads from OneLake directly
- Monitor with Capacity Metrics: Install the Microsoft Fabric Capacity Metrics app to track CU consumption by workload
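The Pause/Resume strategy above can be automated against the Azure Resource Manager REST API, which exposes suspend/resume actions on `Microsoft.Fabric/capacities` resources. The sketch below is a sketch only: the `api-version` string is an assumption to verify, and acquiring the bearer token (e.g. via `azure.identity`) is left out.

```python
import urllib.request

API_VERSION = "2023-11-01"  # assumed; check the current ARM reference

def capacity_action_url(subscription_id: str, resource_group: str,
                        capacity_name: str, action: str) -> str:
    """Build the ARM URL for a capacity 'suspend' or 'resume' POST."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Fabric/capacities/{capacity_name}"
        f"/{action}?api-version={API_VERSION}"
    )

def set_capacity_state(sub_id: str, rg: str, name: str,
                       token: str, pause: bool) -> int:
    """POST the suspend (pause) or resume action; returns the HTTP status."""
    action = "suspend" if pause else "resume"
    req = urllib.request.Request(
        capacity_action_url(sub_id, rg, name, action),
        method="POST",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:  # raises HTTPError on 4xx/5xx
        return resp.status
```

Scheduled from a nightly job (e.g. an Azure Automation runbook), this is what implements the 60-70% non-production savings mentioned above.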
How Capacity Works: Smoothing, Bursting & Throttling
Fabric doesn't enforce CU limits on a per-second basis. Instead, it uses a smoothing mechanism that spreads your CU consumption over time windows, allowing temporary bursts above your SKU limit. Understanding these mechanics is essential for right-sizing and avoiding unexpected throttling.
Smoothing (24-Hour Window)
Every Fabric operation consumes CU-seconds. Instead of evaluating consumption instantly, Fabric smooths it over a rolling 24-hour window. This means a heavy job at 2 AM can be offset by idle time at 3 AM; your effective utilization is the average, not the peak.
As long as your smoothed consumption stays below the SKU limit, your capacity is healthy, even if individual spikes temporarily exceed the limit.
Bursting
Fabric allows your workloads to burst above your SKU's CU allocation for short periods. This is not extra capacity you pay for; it's borrowed from your future idle time. Bursting is automatic and requires no configuration.
Bursting lets you run a heavy Spark job at 150% of your SKU for 2 hours without throttling, as long as the rest of your 24-hour window is quiet enough to bring the smoothed average back under the limit.
Throttling & Rejection
When your smoothed CU consumption exceeds the SKU limit for too long, Fabric begins throttling, escalating through the stages below:
| Stage | Trigger | Impact | What to Do |
|---|---|---|---|
| Healthy | Smoothed CU < 100% of SKU | No impact; all jobs run normally | Keep monitoring |
| Throttled | Smoothed CU exceeds 100% (10-min window overage) | Interactive jobs (queries, reports) delayed 20+ seconds | Reduce concurrent jobs or wait for usage to drop |
| Heavy Throttle | Sustained overage across multiple windows | Both interactive & background jobs delayed significantly | Scale up SKU or pause non-critical workloads |
| Rejected | Extreme sustained overage (24-hour carry-forward full) | New job submissions fail with errors | Immediately scale up or cancel heavy jobs |
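The enforcement stages above can be sketched as a classifier over "overage": how many minutes of future smoothed capacity the workload has already borrowed. The 10-minute, 60-minute, and 24-hour cut-points below are drawn from Microsoft's published throttling policy, but verify them against current documentation before relying on them.

```python
# Classify capacity health from accumulated overage (minutes of future
# smoothed capacity already consumed). Thresholds follow Microsoft's
# published throttling policy; verify before depending on them.
def throttle_stage(overage_minutes: float) -> str:
    if overage_minutes <= 10:
        return "Healthy"          # all jobs run normally
    if overage_minutes <= 60:
        return "Throttled"        # interactive jobs delayed ~20 seconds
    if overage_minutes <= 24 * 60:
        return "Heavy Throttle"   # interactive and background jobs impacted
    return "Rejected"             # new job submissions fail

print(throttle_stage(5), throttle_stage(45), throttle_stage(90),
      throttle_stage(2000))  # Healthy Throttled Heavy Throttle Rejected
```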
Worked Example: A Day in the Life on F64
Here's a concrete scenario for a team running on F64 (64 CUs) to illustrate how smoothing and bursting work together:
| Time Block | Activity | CU Usage | Duration | CU-Hours |
|---|---|---|---|---|
| 12–6 AM | Idle + scheduled refresh | ~5 CU | 6 hours | 30 |
| 6–9 AM | Morning ETL pipelines (Spark + Data Factory) | ~120 CU | 3 hours | 360 |
| 9 AM–12 PM | BI queries + Spark notebooks | ~50 CU | 3 hours | 150 |
| 12–3 PM | Light BI usage, lunch break | ~30 CU | 3 hours | 90 |
| 3–6 PM | Reports + ad-hoc Spark | ~55 CU | 3 hours | 165 |
| 6 PM–12 AM | Minimal activity | ~10 CU | 6 hours | 60 |
| **Total** | | | 24 hours | **855** |
| **24h Average** | 855 ÷ 24 | **~35.6 CU** | | |
Even though the 6–9 AM ETL burst consumed 120 CU (nearly 2× the F64 limit), the 24-hour smoothed average is only ~36 CU, well under the 64 CU limit. The long idle hours from 12–6 AM and 6 PM–12 AM "pay back" the burst. No throttling occurs.
If this same team also ran a second heavy ETL job (100+ CU) from 3–6 PM, the 24-hour average would jump to ~52 CU. Add concurrent Spark notebooks and it could exceed 64 CU, triggering throttling. The fix: either stagger heavy jobs, optimize them, or scale to F128.
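The table's arithmetic checks out in code. The 130 CU figure for the hypothetical second ETL job is an assumption chosen to reproduce the ~52 CU average quoted above.

```python
# (CU draw, hours) for each block of the F64 day described above
day = [(5, 6), (120, 3), (50, 3), (30, 3), (55, 3), (10, 6)]
cu_hours = sum(cu * h for cu, h in day)
avg = cu_hours / 24
print(cu_hours, avg)        # 855 35.625 — well under the 64 CU limit

# Hypothetical second ETL job from 3-6 PM (assumed ~130 CU, i.e. "100+ CU")
avg_with_second_job = (cu_hours + 130 * 3) / 24
print(avg_with_second_job)  # 51.875 — i.e. ~52, uncomfortably close to 64
```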
If your capacity is throttled (smoothed CU usage exceeds your allocation for too long), jobs will be delayed or rejected. Monitor the overages dashboard in the Capacity Metrics app and consider scaling up or optimizing heavy jobs.
Capacity Sizing Guide
How to right-size your Fabric capacity for new workloads and migrations while avoiding both over-provisioning and throttling.
Sizing Approach
Capacity sizing is not a one-time exercise; it's an iterative process. Start with an estimate, deploy, monitor, and adjust. The key principle: size for your typical load, not your peak. Fabric's bursting and smoothing mechanisms handle short spikes automatically.
Sizing for New Workloads
When starting fresh with Fabric, follow this framework:
Step 1: Inventory Your Workloads
Catalog what you plan to run and the expected scale:
| Workload Type | Key Sizing Factors | CU Impact |
|---|---|---|
| Data Ingestion (Data Factory) | Data volume, number of sources, refresh frequency | Low–Medium |
| Spark Notebooks | Data volume, transformation complexity, cluster size, concurrency | High |
| Data Warehouse (T-SQL) | Query complexity, concurrent users, data volume | Medium–High |
| Power BI (Direct Lake / Import) | Dataset size, concurrent report viewers, refresh rate | Low–Medium |
| Real-Time Intelligence | Event ingestion rate, query concurrency, retention period | Medium–High |
| Data Science / ML | Model training size, experiment frequency, serving load | High |
Step 2: Use the SKU Estimator
Microsoft provides an official Fabric SKU Estimator tool. Input your expected user count, data volumes, refresh rates, and workload mix to get a recommended starting SKU.
Step 3: Start with Trial or Dev Capacity
| Scenario | Recommended Starting SKU | Rationale |
|---|---|---|
| POC / Learning (1-3 users) | F2 or Free Trial (F64) | Minimal cost; trial gives 60 days of F64 for free |
| Small team (5-10 users, <50 tables) | F4–F8 | Enough for light Spark jobs + Power BI |
| Department (10-50 users, multiple pipelines) | F16–F32 | Concurrent Spark + SQL + BI workloads |
| Enterprise (50+ users, cross-domain) | F64–F128 | F64+ enables free Power BI viewing; handles concurrency |
| Large enterprise (multi-region, mission-critical) | F256+ | High concurrency, multiple domains, always-on workloads |
Add 10-15% headroom above your average expected CU usage to account for growth and unexpected spikes. It's better to start one SKU lower and scale up than to over-provision; Fabric lets you resize capacity at any time.
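The headroom rule above can be sketched directly: add the margin to your measured average draw and pick the smallest F-SKU that covers it. This is a simplification that ignores burst patterns; treat the result as a starting point, not a final answer.

```python
# Smallest F-SKU covering average CU draw plus a headroom margin.
FABRIC_SKUS = [2, 4, 8, 16, 32, 64, 128, 256, 512, 1024]  # F2 .. F1024

def sku_with_headroom(avg_cu: float, headroom: float = 0.15) -> str:
    """Pick the smallest SKU whose CUs cover avg_cu * (1 + headroom)."""
    target = avg_cu * (1 + headroom)
    for cu in FABRIC_SKUS:
        if cu >= target:
            return f"F{cu}"
    return "F1024+"

# A measured 36 CU average (like the worked example) needs ~41.4 CU of room
print(sku_with_headroom(36))  # F64
```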
Sizing for Migrations
When migrating from an existing platform, you have historical usage data to guide your sizing:
From Power BI Premium (P-SKU)
| Power BI Premium SKU | Equivalent Fabric SKU | Capacity Units |
|---|---|---|
| P1 | F64 | 64 CUs |
| P2 | F128 | 128 CUs |
| P3 | F256 | 256 CUs |
| P4 | F512 | 512 CUs |
| P5 | F1024 | 1024 CUs |
You can enable Fabric on your existing P-SKU capacity; there's no need to purchase a new one. The same capacity pool now supports all Fabric workloads in addition to Power BI.
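The P-to-F mapping in the table doubles at each tier starting from P1 = F64, so it can be generated rather than memorized:

```python
# P-SKU to F-SKU equivalence: each P tier doubles, starting at P1 = 64 CUs.
P_TO_F = {f"P{i}": f"F{64 * 2 ** (i - 1)}" for i in range(1, 6)}
print(P_TO_F)
# {'P1': 'F64', 'P2': 'F128', 'P3': 'F256', 'P4': 'F512', 'P5': 'F1024'}
```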
From Azure Synapse / Databricks
- Inventory current compute: Document your Spark pool sizes, SQL DWU usage, and pipeline activity hours
- Map to CUs: There's no exact 1:1 mapping. Run representative workloads on a Fabric trial to benchmark actual CU consumption
- Start parallel: Run Fabric and legacy platform side-by-side during migration. Compare performance and CU usage before cutting over
- Use Shortcuts: Point Fabric to your existing ADLS storage via shortcuts; this lets you test Fabric compute without moving data
From On-Premises (SQL Server / SSIS)
- Measure peak CPU and memory on existing SQL Servers during ETL windows
- Start with F8-F16 for most departmental SQL Server migrations
- Test with mirroring: Use Fabric mirroring to replicate your SQL databases to OneLake and measure CU impact before full migration
- Account for concurrency: Cloud workloads often see higher concurrency than on-prem, so size accordingly
Ongoing Optimization
Monitor with Capacity Metrics
Install the Capacity Metrics app from day one. Track CU consumption by workload type, identify throttling events, and spot optimization opportunities.
Pause Non-Production
Pause dev/test capacities outside business hours and on weekends. This alone can save 60-70% on non-production capacity costs.
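A quick sanity check on that 60-70% range, assuming a dev capacity that runs only 12 hours per weekday:

```python
# Pay-as-you-go billing stops while a capacity is paused, so savings
# scale with the fraction of the week the capacity is off.
active_hours = 12 * 5   # weekdays only, e.g. 7 AM - 7 PM
total_hours = 24 * 7    # hours in a week
savings = 1 - active_hours / total_hours
print(f"{savings:.0%}")  # 64%
```

Pausing over lunch or holidays, or using a shorter business day, pushes this toward the upper end of the range.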
Leverage Bursting & Smoothing
Fabric smooths CU consumption over 24-hour windows and allows short bursts above your SKU limit. Size for your average load, not your peak; bursting handles the spikes.
Optimize Before Scaling
Before upgrading your SKU, optimize: V-Order on tables, efficient Spark sessions, proper partitioning, and well-designed DAX. Fix the bottleneck, not the capacity.
Common sizing pitfalls:
1. Over-provisioning from day one: start small, monitor, scale up.
2. Ignoring concurrency: multiple users running Spark jobs simultaneously consume far more CUs than sequential runs.
3. Not using the Capacity Metrics app: flying blind leads to either overspending or throttling.
4. Forgetting that F64 is the minimum for free Power BI viewer access: factor this into your licensing decision.
Learn More
- Fabric Licenses & SKUs
- Fabric Operations

See Also
- Operations
- CI/CD & Deployment

TCO / ROI Calculator
Compare your current data platform spend against Microsoft Fabric to estimate savings and build a business case.
Enter your current monthly costs for each category below. The calculator estimates the equivalent Fabric cost based on published pricing and typical migration benchmarks, then shows potential savings and a recommended SKU.
Current Monthly Spend
This calculator provides directional estimates based on published Fabric pricing and typical migration savings reported in Microsoft case studies. Actual savings depend on workload complexity, optimization, and usage patterns. For precise estimates, use the official SKU Estimator and run a pilot.
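A directional sketch of the calculator's logic under stated assumptions: the 25% savings factor is a placeholder, not a published benchmark, and the SKU prices are the pay-as-you-go figures from the table earlier in this section. Substitute numbers from your own pilot and the official SKU Estimator.

```python
# Directional TCO estimate: current platform spend -> Fabric equivalent.
ASSUMED_SAVINGS_FACTOR = 0.25  # hypothetical consolidation savings (placeholder)
FABRIC_SKUS_MONTHLY = {"F8": 1050, "F16": 2100, "F32": 4200,
                       "F64": 8400, "F128": 16800}  # pay-as-you-go, from table

def tco_estimate(current_monthly_spend: float) -> dict:
    """Estimate Fabric spend, savings, and the largest SKU that fits."""
    fabric_estimate = current_monthly_spend * (1 - ASSUMED_SAVINGS_FACTOR)
    # Largest SKU whose list price fits inside the estimated Fabric budget
    fitting = [s for s, p in FABRIC_SKUS_MONTHLY.items() if p <= fabric_estimate]
    return {
        "estimated_fabric_monthly": fabric_estimate,
        "estimated_monthly_savings": current_monthly_spend - fabric_estimate,
        "suggested_sku": fitting[-1] if fitting else "F8 or smaller",
    }

# A platform currently costing $12,000/month
print(tco_estimate(12000))
```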