
EC2 Placement Groups: When to Use Cluster, Spread, or Partition

Learn the differences between AWS EC2 placement group strategies. Choose Cluster for HPC, Spread for fault isolation, or Partition for distributed systems like Hadoop and Kafka.

PHILOLAMB · Updated: January 31, 2026
Tags: EC2 · Placement Groups · Cluster · Spread · Partition · HPC

Related Exam Domains

  • Design High-Performing Architectures
  • Design Resilient Architectures

Key Takeaway

Cluster is for low-latency HPC workloads, Spread isolates individual instance failures for critical applications, and Partition is designed for large-scale distributed systems like Hadoop and Kafka. All placement groups are free to use.

Exam Tip

Exam Essential: "HPC/Big Data with low latency?" → Cluster. "High availability, fault isolation?" → Spread. "Hadoop, Cassandra, Kafka?" → Partition.

What Are Placement Groups?

Placement Groups control the physical placement of EC2 instances to optimize for specific workload requirements.

┌─────────────────────────────────────────────────────────┐
│              Placement Group Purpose                     │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  Default Placement (No Placement Group):                │
│  AWS places instances based on available capacity       │
│                                                         │
│  With Placement Groups:                                 │
│  ├── Cluster: Same rack → Low latency                  │
│  ├── Spread: Different hardware → Fault isolation      │
│  └── Partition: Partition-level separation → Distributed│
│                                                         │
│  Cost: FREE (no additional charges)                     │
└─────────────────────────────────────────────────────────┘

Constraints

Constraint       Description
Launch time      Specify the placement group at launch (a stopped instance can also be added via modify-instance-placement)
No movement      A running instance cannot be moved into a placement group
Instance types   Same instance type recommended (Cluster)
Region/AZ        Cluster is single-AZ only; Spread/Partition can span AZs
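As a sketch, each strategy is created the same way with the AWS CLI; only the `--strategy` value changes (the group names below are illustrative, not prescribed):

```shell
# Create one placement group per strategy (names are placeholders).
aws ec2 create-placement-group --group-name my-cluster-group --strategy cluster
aws ec2 create-placement-group --group-name my-spread-group --strategy spread
aws ec2 create-placement-group --group-name my-partition-group --strategy partition \
    --partition-count 7   # optional; up to 7 partitions per AZ
```

Deleting a group (`aws ec2 delete-placement-group --group-name …`) requires it to be empty first.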

Cluster Placement Groups

Concept

Cluster placement groups pack instances close together on the same rack within a single AZ for minimum latency and maximum network throughput.

┌─────────────────────────────────────────────────────────┐
│               Cluster Placement Group                    │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   Single AZ / Single Rack                               │
│   ┌─────────────────────────────────────────┐          │
│   │  ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐   │          │
│   │  │EC2 │ │EC2 │ │EC2 │ │EC2 │ │EC2 │   │          │
│   │  │ 1  │ │ 2  │ │ 3  │ │ 4  │ │ 5  │   │          │
│   │  └────┘ └────┘ └────┘ └────┘ └────┘   │          │
│   │           Packed on same rack           │          │
│   └─────────────────────────────────────────┘          │
│                                                         │
│   Pros: 10-100 Gbps network, microsecond latency       │
│   Cons: Rack failure affects all instances              │
└─────────────────────────────────────────────────────────┘

Use Cases

Use Case                          Reason
HPC (High Performance Computing)  Fast inter-node communication
Big Data Analytics                Large-scale data processing
MPI Workloads                     Minimize message-passing latency
Financial Trading                 Microsecond-level latency matters

Limitations

  • Single AZ only
  • No instance limit, but capacity errors possible
  • Same instance type recommended

Exam Tip

Exam Point: Cluster placement groups work in a single AZ only. For high availability, choose Spread or Partition.
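Launching into a cluster group is a matter of passing the group name in `--placement`. A minimal sketch, assuming the group already exists (AMI ID and group name are placeholders):

```shell
# Launch two network-optimized instances into an existing cluster group.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5n.18xlarge \
    --count 2 \
    --placement "GroupName=my-cluster-group"
```

Launching all instances in a single request, as above, reduces the chance of an insufficient-capacity error compared to adding them one at a time.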

Spread Placement Groups

Concept

Spread placement groups distribute instances across different hardware (racks) to isolate individual instance failures.

┌─────────────────────────────────────────────────────────┐
│                Spread Placement Group                    │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   AZ-a                AZ-b                AZ-c          │
│   ┌─────┐            ┌─────┐            ┌─────┐        │
│   │Rack1│            │Rack3│            │Rack5│        │
│   │┌───┐│            │┌───┐│            │┌───┐│        │
│   ││EC2││            ││EC2││            ││EC2││        │
│   │└───┘│            │└───┘│            │└───┘│        │
│   └─────┘            └─────┘            └─────┘        │
│   ┌─────┐            ┌─────┐            ┌─────┐        │
│   │Rack2│            │Rack4│            │Rack6│        │
│   │┌───┐│            │┌───┐│            │┌───┐│        │
│   ││EC2││            ││EC2││            ││EC2││        │
│   │└───┘│            │└───┘│            │└───┘│        │
│   └─────┘            └─────┘            └─────┘        │
│                                                         │
│   Feature: Each instance on different rack              │
│   Limit: Max 7 instances per AZ                        │
└─────────────────────────────────────────────────────────┘

Limitations

Limit              Description
Instances per AZ   Maximum 7
Total instances    Number of AZs × 7 (e.g., 4 AZs = 28)

Use Cases

Use Case                       Reason
Critical applications          Isolate individual instance failures
Small-scale high availability  Minimize simultaneous failures
Database replication           Separate primary and replica
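Because a spread group can span AZs, high availability usually means launching each instance into a subnet in a different AZ. A hedged sketch (subnet IDs, AMI ID, and group name are placeholders):

```shell
# One instance per AZ, all in the same spread group (IDs are placeholders).
for subnet in subnet-aaa111 subnet-bbb222 subnet-ccc333; do
  aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type m5.large \
      --subnet-id "$subnet" \
      --placement "GroupName=my-spread-group"
done
```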

Partition Placement Groups

Concept

Partition placement groups divide instances into logical partitions, where each partition is placed on different racks. Ideal for large-scale distributed workloads.

┌─────────────────────────────────────────────────────────┐
│              Partition Placement Group                   │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   AZ-a                                                  │
│   ┌─────────────────┐  ┌─────────────────┐             │
│   │   Partition 1   │  │   Partition 2   │             │
│   │   (Rack A)      │  │   (Rack B)      │             │
│   │  ┌───┐ ┌───┐   │  │  ┌───┐ ┌───┐   │             │
│   │  │EC2│ │EC2│   │  │  │EC2│ │EC2│   │             │
│   │  └───┘ └───┘   │  │  └───┘ └───┘   │             │
│   │  ┌───┐ ┌───┐   │  │  ┌───┐ ┌───┐   │             │
│   │  │EC2│ │EC2│   │  │  │EC2│ │EC2│   │             │
│   │  └───┘ └───┘   │  │  └───┘ └───┘   │             │
│   └─────────────────┘  └─────────────────┘             │
│                                                         │
│   Features:                                             │
│   - No rack sharing between partitions                 │
│   - Multiple instances per partition                   │
│   - Max 7 partitions per AZ                           │
│   - Scales to 100s of instances                       │
└─────────────────────────────────────────────────────────┘

Limitations

Limit                    Description
Partitions per AZ        Maximum 7
Total instances          Limited only by account limits (scales to 100s)
Instances per partition  No limit

Use Cases

Use Case     Reason
Hadoop/HDFS  Replicate data blocks across partitions
Cassandra    Distribute ring nodes across partitions
Kafka        Place brokers on separate racks
HBase        Isolate RegionServers

Partition Metadata

Partition placement groups provide partition information via the metadata service:

# Get partition number for the instance
curl http://169.254.169.254/latest/meta-data/placement/partition-number

Applications can use this for rack-aware replication.
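The `curl` call above uses IMDSv1. On instances where IMDSv2 is required (the default on newer AMIs), the same lookup needs a session token first:

```shell
# IMDSv2: fetch a session token, then query the partition number.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/partition-number
```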

Cluster vs Spread vs Partition Comparison

Feature         Cluster                     Spread               Partition
Purpose         Low latency                 Fault isolation      Large-scale distributed
AZ Support      Single AZ                   Multi-AZ             Multi-AZ
Instance Limit  None (capacity permitting)  7 per AZ             Account limits (100s)
Rack Placement  Same rack                   All different racks  Different rack per partition
Failure Scope   All affected                1 instance affected  Partition-level
Cost            Free                        Free                 Free

Selection Guide

┌─────────────────────────────────────────────────────────┐
│                  Placement Group Selection               │
├─────────────────────────────────────────────────────────┤
│                                                         │
│  Q1: Is network performance the top priority?           │
│      (HPC, Big Data, Finance)                           │
│      └── Yes → Cluster                                 │
│                                                         │
│  Q2: Small scale with individual fault isolation?       │
│      (Critical apps, 7 or fewer instances)              │
│      └── Yes → Spread                                  │
│                                                         │
│  Q3: Large-scale distributed system?                    │
│      (Hadoop, Cassandra, Kafka)                         │
│      └── Yes → Partition                               │
│                                                         │
│  Q4: No special requirements?                           │
│      └── No placement group needed                     │
└─────────────────────────────────────────────────────────┘
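The decision tree above can be captured in a small shell helper. This is purely illustrative; the keyword matching is my own simplification of the questions in the box:

```shell
# Map a workload description to a placement group strategy,
# following the selection guide (keyword matching is illustrative).
pick_strategy() {
  case "$1" in
    *hpc*|*latency*|*trading*)        echo "cluster" ;;
    *hadoop*|*kafka*|*cassandra*)     echo "partition" ;;
    *critical*|*isolation*)           echo "spread" ;;
    *)                                echo "none" ;;
  esac
}

pick_strategy "hpc simulation"   # → cluster
pick_strategy "kafka brokers"    # → partition
pick_strategy "critical app"     # → spread
```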

SAA-C03 Exam Focus Points

  1. Scenario-based selection: HPC → Cluster, Small HA → Spread, Hadoop/Kafka → Partition
  2. Spread limit: Max 7 instances per AZ
  3. Cluster constraint: Single AZ only
  4. Partition limit: Max 7 partitions per AZ; instance count limited only by account limits
  5. Cost: All placement groups are FREE

Exam Tip

Sample Exam Question: "A company needs to deploy a Hadoop cluster on EC2 with fault tolerance at the partition level. Which placement group strategy should they use?" → Answer: Partition (Hadoop requires partition-level fault isolation for rack-aware replication)

Frequently Asked Questions

Q: Do placement groups cost extra?

No. Placement groups are free. You only pay for the regular EC2 instance costs.

Q: Can I move a running instance to a placement group?

No, not while it is running. Stop the instance, assign it to the group with modify-instance-placement, then start it again. (Creating an AMI and launching a replacement inside the group also works.)
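With current AWS CLI versions, a stopped instance can be assigned to a placement group directly via modify-instance-placement, avoiding an AMI round trip. Sketch (instance ID and group name are placeholders):

```shell
# Move a stopped instance into a placement group (IDs are placeholders).
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-placement \
    --instance-id i-0123456789abcdef0 \
    --group-name my-cluster-group
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```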

Q: What if I get a capacity error in a Cluster placement group?

  1. Stop and restart the instances
  2. Try a different instance type
  3. Create a new Cluster placement group
  4. Create an On-Demand Capacity Reservation in the placement group

Q: When should I choose Spread vs Partition?

  • 7 or fewer instances + individual fault isolation → Spread
  • Large-scale + group-level fault isolation → Partition

Q: What happens if I don't specify a partition in Partition placement group?

AWS automatically assigns a partition. Specify PartitionNumber to place instances in a specific partition.
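Pinning an instance to a partition is done via the `PartitionNumber` field of `--placement` at launch. A minimal sketch (AMI ID and group name are placeholders):

```shell
# Launch an instance into partition 2 of an existing partition group.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type r5.xlarge \
    --placement "GroupName=my-partition-group,PartitionNumber=2"
```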
