This page defines the capacity-level configuration options that must be evaluated for our Microsoft Fabric platform.
Our target operating model uses Fabric primarily as a data storage and exposure platform based on Lakehouse and Warehouse, serving both BI consumption and external data exposure. The key points are:
Fabric capacity configuration is a platform governance topic, not only an infrastructure topic.
The primary control for protecting Data Platform Core is capacity isolation.
Shared capacity between Data Platform Core and Domain production creates shared operational risk.
A dedicated capacity for Data Platform Core is the recommended baseline for this architecture.
| Version | Date | Description | Contributor |
|---|---|---|---|
| V0.1 | | Initial document | COLOMBANI Théo |
This page defines the capacity-level configuration options that must be assessed for our Microsoft Fabric platform.
The objective is to ensure:
production stability
workload isolation
controlled scalability
governance of shared compute resources
operational independence of the Data Platform Core workspace
In our architecture, Microsoft Fabric is primarily used as a storage and exposure platform, based on Lakehouse and Warehouse, for both BI consumption and external data exposure.
Our target operating model is structured as follows:
bronze and silver layers: core production data preparation and controlled exposure foundation
gold layer: business-oriented and BI-ready data products, with domain-level exposure for reporting and consumption
The Data Platform Core production workspace must remain operational independently from Domain workspaces, including in situations where Domain workloads generate higher or less predictable compute consumption.
Capacity design must be driven by isolation first, then by optimization.
In our context, the main purpose of capacity governance is not only to size compute correctly. It is primarily to:
protect critical Data Platform Core workloads
separate critical and non-critical workloads
reduce cross-workspace contention
create predictable operating conditions
support controlled platform growth
For our platform, capacity is an architecture boundary, not only a billing or administration object.
Data Platform Core production capacity, used only for:
Data Platform Core bronze
Data Platform Core silver
core production ingestion / preparation / exposure foundations
Domain production capacity, used for:
Domain gold workspaces
business-facing data products
BI-oriented workloads
potentially more variable usage patterns
Non-production capacity, used for:
development
testing
experimentation
validation before production promotion
Do not place Data Platform Core production and Domain production on the same capacity if Data Platform Core production must remain operational independently.
A shared capacity creates a shared risk envelope. Even if workspaces are logically separated, they still depend on the same underlying capacity behavior.
The assignment of workspaces to specific Fabric capacities.
This is the most important configuration decision in our model because it determines whether workloads share the same compute risk domain.
assign Data Platform Core production to a dedicated capacity
assign Domain production to a separate capacity whenever possible
isolate non-production from all production capacities
avoid mixing critical platform workloads with variable business workloads
Recommendation
Workspace assignment is the primary mechanism used to guarantee production isolation and operational independence.
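As an illustration, workspace assignment can be automated through the Power BI REST API's AssignToCapacity endpoint. The sketch below assumes a valid Azure AD bearer token is obtained elsewhere; the identifiers are placeholders:

```python
import json
import urllib.request

POWER_BI_API = "https://api.powerbi.com/v1.0/myorg"

def build_assignment_request(workspace_id: str, capacity_id: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for the AssignToCapacity call."""
    url = f"{POWER_BI_API}/groups/{workspace_id}/AssignToCapacity"
    body = json.dumps({"capacityId": capacity_id}).encode("utf-8")
    return url, body

def assign_workspace_to_capacity(workspace_id: str, capacity_id: str, token: str) -> None:
    """Execute the assignment; assumes the caller holds capacity admin rights."""
    url, body = build_assignment_request(workspace_id, capacity_id)
    req = urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:  # raises on HTTP error
        assert resp.status == 200
```

Wrapping this call in a controlled pipeline (rather than using the portal manually) makes every assignment auditable.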
The set of permissions allowing administrators to manage a capacity and move workspaces into or out of it.
Even with a good target architecture, weak governance can reintroduce risk if workspaces are moved without control.
restrict capacity admin rights to the central Data Platform or IT team
restrict workspace reassignment rights on critical capacities
require formal approval for any workspace added to the Data Platform Core production capacity
prevent self-service reassignment into critical production capacity
Warning
A dedicated production capacity loses most of its value if workspace assignment is not tightly governed.
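One way to enforce this governance is a periodic drift check comparing observed workspace-to-capacity assignments against an approved allowlist. A minimal sketch, with hypothetical workspace and capacity names:

```python
def find_unapproved_assignments(
    actual: dict[str, str],          # workspace name -> capacity name (observed)
    approved: dict[str, set[str]],   # capacity name -> approved workspace names
) -> list[str]:
    """Return workspaces sitting on a capacity without formal approval."""
    violations = []
    for workspace, capacity in actual.items():
        if workspace not in approved.get(capacity, set()):
            violations.append(f"{workspace} on {capacity} is not approved")
    return violations
```

Run on a schedule, this turns the approval requirement into a detective control rather than a convention.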
A protection mechanism used to manage overload situations and reduce the impact of excessive background activity on a capacity.
It can help protect shared capacities, especially where Domain workspaces may generate bursty or uneven usage patterns.
consider enabling surge protection on shared Domain production capacities
use it as a protection layer for variable workloads
do not rely on it as the sole protection for Data Platform Core
Surge protection is a supporting control, not a substitute for proper isolation.
Recommendation
Use surge protection on shared capacities.
Do not use it as a replacement for dedicated capacity when a workspace is mission-critical.
The sizing of Fabric capacity and the ability to adjust it as workload volume evolves.
Even a well-isolated architecture can fail operationally if the capacity is persistently undersized.
size Data Platform Core with stability and operational headroom in mind
review Domain production more frequently, as usage can be less predictable
use monitoring trends to drive scaling decisions
avoid reactive resizing without understanding the underlying workload pattern
Data Platform Core should be sized for continuity first
Domain capacities can be managed more elastically
Decision
Data Platform Core capacity sizing must prioritize service continuity over cost minimization.
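A sustained-utilization rule is one way to make scaling decisions trend-driven rather than reactive. The thresholds below are illustrative assumptions, not Microsoft guidance:

```python
def scaling_recommendation(daily_peak_utilization: list[float],
                           sustained_threshold: float = 0.80,
                           sustained_days: int = 5) -> str:
    """Recommend a sizing action from recent daily peak utilization (0.0-1.0).

    A single spike is ignored; only a sustained pattern triggers scale-up.
    Thresholds are illustrative placeholders for the platform team to tune.
    """
    recent = daily_peak_utilization[-sustained_days:]
    if len(recent) == sustained_days and all(u >= sustained_threshold for u in recent):
        return "scale-up"
    if all(u < 0.40 for u in daily_peak_utilization):
        return "consider-scale-down"
    return "hold"
```

For Data Platform Core, the scale-down branch should additionally require explicit architecture approval, in line with the continuity-first decision above.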
A mechanism that allows excess usage beyond the purchased capacity threshold, subject to billing and governance.
It can reduce the risk of operational disruption during rare peaks.
consider enabling overage for Data Platform Core only with explicit financial approval
define a capped and governed usage threshold
treat overage as a resilience mechanism, not a normal operating model
Overage is a safety net, not a sizing strategy.
Warning
Do not use overage to compensate for structural under-sizing.
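The capped-and-governed threshold can be audited automatically. The sketch below flags both cap breaches and recurring overage; the governance values are illustrative assumptions:

```python
def audit_overage(monthly_overage_pct: list[float],
                  cap_pct: float = 10.0,
                  max_months_with_overage: int = 2) -> list[str]:
    """Flag overage behavior that indicates structural under-sizing.

    monthly_overage_pct: usage above purchased capacity, as % per month.
    cap_pct and max_months_with_overage are placeholder governance values.
    """
    findings = []
    months_with_overage = [p for p in monthly_overage_pct if p > 0]
    if any(p > cap_pct for p in monthly_overage_pct):
        findings.append("overage exceeded the governed cap")
    if len(months_with_overage) > max_months_with_overage:
        findings.append("recurring overage: review base capacity sizing")
    return findings
```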
The monitoring of capacity usage, saturation patterns, top consumers, and operational degradation signals.
Capacity governance is only effective if usage and saturation can be observed and acted upon.
For each production capacity, define:
monitoring owner
review cadence
alert thresholds
escalation path
expected remediation actions
monitor recurring peaks
identify top consuming workspaces and items
review saturation or degradation patterns
correlate operational issues with refresh, ingestion, or usage spikes
Recommendation
Capacity monitoring must be part of normal run operations, not only incident management.
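Identifying top consumers can be scripted from an export of capacity usage metrics (for example from the Fabric Capacity Metrics app); the record schema below is an assumption for illustration:

```python
from collections import defaultdict

def top_consumers(usage_records: list[dict], n: int = 3) -> list[tuple[str, float]]:
    """Aggregate capacity-unit consumption per workspace and return the top n.

    Each record is assumed shaped like {"workspace": str, "cu_seconds": float};
    adapt the field names to the actual metrics export.
    """
    totals: dict[str, float] = defaultdict(float)
    for record in usage_records:
        totals[record["workspace"]] += record["cu_seconds"]
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:n]
```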
The capacity-level disaster recovery posture associated with production data continuity.
The Data Platform Core workspace supports bronze and silver foundations, which makes it a core dependency for downstream exposure.
perform an explicit DR assessment for Data Platform Core
document whether DR is enabled or not
document expected recovery assumptions and limitations
ensure this is an explicit architecture decision
For Data Platform Core, DR should never be left undocumented.
Decision
Disaster recovery for Data Platform Core must be assessed explicitly and recorded as an approved architecture choice.
The definition of who is informed when capacity issues occur and how operational response is triggered.
Without alert ownership, capacity incidents tend to be handled too late or inconsistently.
Define:
alert recipients
severity levels
response expectations
operational communication path
Recommendation
Every production capacity must have a clearly assigned operational owner and alerting path.
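These definitions can be captured as a simple routing table so every alert resolves to an owner and a response expectation. All contacts and response times below are placeholders:

```python
# Severity -> (recipients, expected response) routing table; every entry is a
# placeholder to be replaced with the platform team's actual contacts.
ALERT_ROUTING = {
    "critical": (["platform-oncall@example.com"], "respond within 30 minutes"),
    "warning": (["platform-team@example.com"], "review within 1 business day"),
    "info": (["platform-team@example.com"], "review at next capacity review"),
}

def route_alert(severity: str) -> tuple[list[str], str]:
    """Resolve recipients and response expectation; unknown severities escalate."""
    return ALERT_ROUTING.get(severity, ALERT_ROUTING["critical"])
```

Escalating unknown severities to the critical path is a deliberately conservative default.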
Capacity-level settings related to Spark and Data Engineering workloads.
These settings are relevant if Spark-based processing is materially used in the IT workspace.
keep Spark governance centralized
avoid uncontrolled compute sprawl
document Spark rules separately if Spark is not a central workload in the platform
This is a secondary topic in our model unless Spark becomes a major production dependency.
| Setting | Data Platform Core | Domain Production | Non-Production |
|---|---|---|---|
| Dedicated capacity | Yes | Preferred | Separate |
| Shared with Data Platform Core | No | No | No |
| Workspace reassignment rights | Very restricted | Restricted | Controlled |
| Surge protection | Optional complement | Recommended | Optional |
| Capacity overage | Optional, capped | Optional, capped | Usually not required |
| Monitoring | Mandatory | Mandatory | Recommended |
| DR assessment | Mandatory | Case by case | Not priority |
| Spark governance | Case by case | Case by case | Flexible |
| Scaling review cadence | Regular | Regular | Periodic |
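The isolation rules summarized in the table can also be enforced as an automated check over the actual layout. A sketch assuming a simple workspace-to-capacity mapping and per-workspace tier tags:

```python
def validate_capacity_layout(assignments: dict[str, str],
                             workspace_tier: dict[str, str]) -> list[str]:
    """Check isolation rules from the summary table.

    assignments: workspace -> capacity; workspace_tier: workspace -> one of
    "core", "domain", "non-prod". Rules: Data Platform Core never shares a
    capacity, and non-production never shares a capacity with production.
    """
    by_capacity: dict[str, set[str]] = {}
    for workspace, capacity in assignments.items():
        by_capacity.setdefault(capacity, set()).add(workspace_tier[workspace])
    issues = []
    for capacity, tiers in sorted(by_capacity.items()):
        if "core" in tiers and len(tiers) > 1:
            issues.append(f"{capacity}: Data Platform Core shares capacity with other tiers")
        if "non-prod" in tiers and tiers & {"core", "domain"}:
            issues.append(f"{capacity}: non-production mixed with production")
    return issues
```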
Protect Data Platform Core by design.
Critical IT workloads must not depend on the same shared capacity behavior as variable domain workloads.
Use isolation before optimization.
Do not try to solve structural contention only with reactive tuning or protection features.
Treat overage as an exception mechanism.
It may improve resilience, but it must not become the default operating mode.
Make monitoring part of standard operations.
Capacity review must be proactive and periodic.
Separate production from experimentation.
Development and testing workloads must not compete with critical production capacity.
The recommended target state for our platform is:
one dedicated Fabric capacity for Data Platform Core
one separate Fabric capacity for Domain production
one separate non-production capacity
centralized control of workspace assignment
standardized monitoring and alerting
optional capped overage for resilience
explicit DR assessment for Data Platform Core
This is the most coherent model for a Fabric platform used primarily as a storage and exposure layer, where the Data Platform Core workspace must remain stable independently from Domain activity.
Has a dedicated capacity been confirmed for Data Platform Core?
Has Domain production been isolated from Data Platform Core?
Has non-production been separated from production capacities?
Have capacity admin roles been limited to the central platform team?
Have workspace reassignment rights been formally governed?
Has surge protection been evaluated for shared Domain capacities?
Has capacity overage been evaluated and financially approved where relevant?
Has a monitoring owner been assigned for each production capacity?
Have alert thresholds and escalation paths been defined?
Has disaster recovery been explicitly assessed for Data Platform Core?
Have Spark-related settings been reviewed, if applicable?
Has the target capacity model been approved as part of platform governance?