This page defines the capacity-level configuration options that must be evaluated for our Microsoft Fabric platform, with a focus on:

  • production stability
  • workload isolation
  • controlled scaling
  • governance of shared resources
  • independence of the Data Platform Core workspace

Our target operating model uses Fabric primarily as a data storage and exposure platform based on Lakehouse and Warehouse, serving both BI consumption and external data exposure.

Fabric capacity configuration is a platform governance topic, not only an infrastructure topic.

The primary control for protecting Data Platform Core is capacity isolation.

Shared capacity between Data Platform Core and Domain production creates shared operational risk.

A dedicated capacity for Data Platform Core is the recommended baseline for this architecture.


| Version | Date | Description | Contributor |
|---|---|---|---|
| V0.1 | | Initial document | COLOMBANI Théo |
| V0.2 | | Updated with schema proposal and checklist | COLOMBANI Théo |




Objective

This page defines the main capacity design decisions for our Microsoft Fabric platform.

The objective is to ensure:

  • production stability
  • workload isolation
  • controlled scaling
  • governance of shared resources
  • independence of the Data Platform Core workspace

In our architecture, Microsoft Fabric is primarily used as a storage and exposure platform, based on Lakehouse and Warehouse, for both BI consumption and external data exposure.

Platform Context

Data Platform Core workspace

Domain workspaces

Key requirement

The Data Platform Core workspace must remain operational independently from Domain workspaces, including when Domain workloads generate higher or less predictable compute consumption.

Key messages

What we should implement

Recommended target model

Capacity design principles

Proposed capacity design



Decision guide

| Decision area | Use this approach when | Recommended decision |
|---|---|---|
| Dedicated capacity for Data Platform Core | bronze and silver are production-critical; downstream BI or external exposure depends on them; Domain workloads are more variable than Core workloads; Core continuity is a priority | assign Data Platform Core to a dedicated capacity |
| Separate Domain production capacity | multiple Domain workspaces coexist; Domain workloads may create contention; business-facing usage is less predictable; Domain growth should not affect Core operations | assign Domain production to a separate capacity |
| Separate non-production capacity | development and testing are active; experimentation may generate compute spikes; production stability must be protected from non-production activity | keep non-production on a separate capacity |
| Architecture review required | Core and Domain are still planned on the same capacity; workspace reassignment is not tightly governed; production and non-production still share capacity; capacity sizing issues become recurrent | escalate to architecture review |

Checklist

Recommended configuration matrix

| Setting | Data Platform Core | Domain Production | Non-Production | Recommendation |
|---|---|---|---|---|
| Dedicated capacity | Yes | Preferred | Separate | Mandatory for Data Platform Core |
| Shared with Data Platform Core | No | No | No | Not allowed |
| Workspace reassignment rights | Very restricted | Restricted | Controlled | Govern centrally |
| Monitoring | Mandatory | Mandatory | Recommended | Standard operating baseline |
| DR assessment | Mandatory | Case by case | Not priority | Explicit decision required |
| Spark governance | Case by case | Case by case | Flexible | Only where relevant |
| Scaling review cadence | Regular | Regular | Periodic | Metrics-driven |

Detailed design sections

Design principle

Recommendation

Capacity design must be driven by isolation first, then by optimization.

Why it matters

In our context, the main purpose of capacity governance is not only to size compute correctly. It is primarily to:

  • isolate production-critical workloads from variable ones
  • guarantee the operational independence of Data Platform Core
  • govern shared compute as an explicit architecture boundary

Decision statement

For our platform, capacity is an architecture boundary, not only a billing or administration object.


Recommended target model

Capacity A — Data Platform Core

Used only for:

  • the Data Platform Core workspace (bronze and silver production workloads)

Capacity B — Domain Production

Used for:

  • Domain production workspaces serving business-facing consumption

Capacity C — Non-Production

Used for:

  • development, testing, and experimentation workspaces

Recommendation

Do not place Data Platform Core production and Domain production on the same capacity if Data Platform Core must remain operational independently.

Why this matters

A shared capacity creates a shared risk envelope. Even if workspaces are logically separated, they still depend on the same underlying capacity behavior.
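The isolation rule can be expressed as a machine-checkable invariant. A minimal Python sketch, where capacity and workspace names are illustrative placeholders rather than our real inventory:

```python
# Illustrative capacity layout: capacity name -> assigned workspaces.
# All names below are placeholders, not our real inventory.
CAPACITY_LAYOUT = {
    "capacity-core-prod": ["Data Platform Core"],
    "capacity-domain-prod": ["Domain Sales", "Domain Finance"],
    "capacity-nonprod": ["Core Dev", "Domain Sandbox"],
}

def core_isolation_violations(layout, core_workspace="Data Platform Core"):
    """Return workspaces that share a capacity with Data Platform Core."""
    for capacity, workspaces in layout.items():
        if core_workspace in workspaces:
            return [w for w in workspaces if w != core_workspace]
    return []

# An empty result means Core sits alone on its capacity.
assert core_isolation_violations(CAPACITY_LAYOUT) == []
```

A check like this can run in CI against the exported workspace inventory, so any drift from the target layout is caught before it becomes an incident.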


Workspace-to-capacity assignment

What it is

The assignment of workspaces to specific Fabric capacities.

Why it matters

This is the most important configuration decision in our model because it determines whether workloads share the same compute risk domain.

Recommendation

Key message

Workspace assignment is the primary mechanism used to guarantee production isolation and operational independence.
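At the API level, assignment is a single operation per workspace. A hedged sketch using the Power BI REST `AssignToCapacity` endpoint (token acquisition and real workspace/capacity IDs are out of scope here and shown as parameters):

```python
import json
import urllib.request

API = "https://api.powerbi.com/v1.0/myorg"

def assign_to_capacity_request(workspace_id, capacity_id):
    """Build the URL and JSON body for the AssignToCapacity call."""
    url = f"{API}/groups/{workspace_id}/AssignToCapacity"
    return url, {"capacityId": capacity_id}

def assign_to_capacity(workspace_id, capacity_id, token):
    """Move a workspace onto a capacity (requires an Azure AD token for the Power BI API)."""
    url, body = assign_to_capacity_request(workspace_id, capacity_id)
    req = urllib.request.Request(
        url,
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```

Because a single call can move a workspace across the isolation boundary, the identities allowed to invoke it should be as restricted as the governance section below requires.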


Capacity administration and reassignment governance

What it is

The set of permissions allowing administrators to manage a capacity and move workspaces into or out of it.

Why it matters

Even with a good target architecture, weak governance can reintroduce risk if workspaces are moved without control.

Recommendation

Key message

A dedicated production capacity loses most of its value if workspace assignment is not tightly governed.
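Governance can be backed by a periodic audit: compare the current workspace-to-capacity assignments (e.g. exported via the Power BI admin API) against the approved mapping. A minimal sketch of the comparison logic, with hypothetical data:

```python
def assignment_drift(actual, approved):
    """Workspaces whose current capacity differs from the approved mapping.

    actual, approved: dicts of workspace name -> capacity name.
    Returns workspace -> (expected_capacity, actual_capacity).
    """
    drift = {}
    for workspace, capacity in actual.items():
        expected = approved.get(workspace)  # None = workspace not approved at all
        if capacity != expected:
            drift[workspace] = (expected, capacity)
    return drift

# Hypothetical example: "Domain Sales" was moved onto the Core capacity.
approved = {"Data Platform Core": "capacity-core-prod", "Domain Sales": "capacity-domain-prod"}
actual = {"Data Platform Core": "capacity-core-prod", "Domain Sales": "capacity-core-prod"}
assert assignment_drift(actual, approved) == {"Domain Sales": ("capacity-domain-prod", "capacity-core-prod")}
```

Any non-empty drift result is exactly the "workspace moved without control" scenario this section warns about, and should trigger the escalation path from the decision guide.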


Capacity sizing and scaling

What it is

The sizing of Fabric capacity and the ability to adjust it as workload volume evolves.

Why it matters

Even a well-isolated architecture can fail operationally if the capacity is persistently undersized.

Recommendation

Practical interpretation

Key message

Data Platform Core capacity sizing must prioritize service continuity over cost minimization.
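Fabric capacity SKUs are managed as Azure resources, so a resize is an ARM request against the `Microsoft.Fabric/capacities` provider. A sketch of the request construction only; the `api-version` and SKU tier values are assumptions to verify against the current ARM reference before use:

```python
def scale_capacity_request(subscription_id, resource_group, capacity_name, sku_name):
    """Build the ARM PATCH that changes a Fabric capacity SKU (e.g. F64 -> F128).

    The api-version below is an assumption; confirm it in the
    Microsoft.Fabric ARM reference for your tenant.
    """
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Fabric/capacities/{capacity_name}"
        "?api-version=2023-11-01"
    )
    body = {"sku": {"name": sku_name, "tier": "Fabric"}}
    return url, body
```

For the Core capacity, such a resize should be driven by the monitoring metrics described below, not performed reactively during an incident.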


Monitoring and operational visibility

What it is

The monitoring of capacity usage, saturation patterns, top consumers, and operational degradation signals.

Why it matters

Capacity governance is only effective if usage and saturation can be observed and acted upon.

Recommendation

For each production capacity, define:

  • the capacity usage and saturation metrics to track
  • how top consumers are identified and reviewed
  • the degradation signals that trigger action
  • the scaling review cadence

Minimum baseline

Key message

Capacity monitoring must be part of normal run operations, not only incident management.
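One way to make the baseline actionable is a simple threshold rule over utilization samples (for example, percentages exported from the Fabric Capacity Metrics app; the 80% threshold and window length below are illustrative, not agreed values):

```python
def saturation_alerts(samples, threshold=80.0, sustained=3):
    """Flag sustained saturation: >= `sustained` consecutive samples above `threshold` %.

    Returns the start index of each sustained run.
    """
    alerts, run = [], 0
    for i, pct in enumerate(samples):
        run = run + 1 if pct > threshold else 0
        if run == sustained:
            alerts.append(i - sustained + 1)
    return alerts

# Three consecutive samples above 80% starting at index 1 -> one alert.
assert saturation_alerts([50, 90, 91, 92, 40]) == [1]
```

Requiring a sustained run rather than a single spike keeps the alert aligned with run operations: transient peaks are expected, persistent saturation is the signal that feeds the scaling review.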


Disaster recovery

What it is

The capacity-level disaster recovery posture associated with production data continuity.

Why it matters

The Data Platform Core workspace supports bronze and silver foundations, which makes it a core dependency for downstream exposure.

Recommendation

Key message

For Data Platform Core, disaster recovery should never be left undocumented.


Data Engineering and Spark-related settings

What it is

Capacity-level settings related to Spark and Data Engineering workloads.

Why it matters

These settings are relevant if Spark-based processing is materially used in the platform.

Recommendation

Key message

This is a secondary topic unless Spark becomes a major production dependency.

Operational rules

Rule 1

Protect Data Platform Core by design.
Critical Core workloads must not depend on the same shared capacity behavior as variable Domain workloads.

Rule 2

Use isolation before optimization.
Do not try to solve structural contention only with reactive tuning.

Rule 3

Make monitoring part of standard operations.
Capacity review must be proactive and periodic.

Rule 4

Separate production from experimentation.
Development and testing workloads must not compete with critical production capacity.

Proposed architecture decision

Recommended decision

The recommended target state for our platform is:

  • a dedicated capacity for Data Platform Core (Capacity A)
  • a separate capacity for Domain production (Capacity B)
  • a separate capacity for non-production workloads (Capacity C)

Architecture conclusion

This is the most coherent model for a Fabric platform used primarily as a storage and exposure layer, where the Data Platform Core workspace must remain stable independently from Domain activity.