EC2 Placement Groups: When to Use Cluster, Spread, or Partition
Learn the differences between AWS EC2 placement group strategies. Choose Cluster for HPC, Spread for fault isolation, or Partition for distributed systems like Hadoop and Kafka.
Related Exam Domains
- Design High-Performing Architectures
- Design Resilient Architectures
Key Takeaway
Cluster is for low-latency HPC workloads, Spread isolates individual instance failures for critical applications, and Partition is designed for large-scale distributed systems like Hadoop and Kafka. All placement groups are free to use.
Exam Tip
Exam Essential: "HPC/Big Data with low latency?" → Cluster. "High availability, fault isolation?" → Spread. "Hadoop, Cassandra, Kafka?" → Partition.
What Are Placement Groups?
Placement Groups control the physical placement of EC2 instances to optimize for specific workload requirements.
┌─────────────────────────────────────────────────────────┐
│ Placement Group Purpose │
├─────────────────────────────────────────────────────────┤
│ │
│ Default Placement (No Placement Group): │
│ AWS places instances based on available capacity │
│ │
│ With Placement Groups: │
│ ├── Cluster: Same rack → Low latency │
│ ├── Spread: Different hardware → Fault isolation │
│ └── Partition: Partition-level separation → Distributed│
│ │
│ Cost: FREE (no additional charges) │
└─────────────────────────────────────────────────────────┘
Constraints
| Constraint | Description |
|---|---|
| Launch time | Specify the placement group when launching the instance |
| No movement | A running instance cannot be moved into a placement group; it must be stopped first (then moved with `modify-instance-placement`) |
| Instance types | Same instance type recommended (Cluster) |
| Region/AZ | Cluster is single-AZ only; Spread/Partition can span AZs |
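A placement group is created before any instances reference it. A minimal sketch with the AWS CLI (the group names here are illustrative, not prescribed):

```shell
# Create a cluster placement group (single AZ, low latency)
aws ec2 create-placement-group \
    --group-name hpc-cluster-pg \
    --strategy cluster

# Create a spread placement group (each instance on distinct hardware)
aws ec2 create-placement-group \
    --group-name critical-spread-pg \
    --strategy spread
```

Instances then join the group at launch via the `--placement GroupName=...` option of `run-instances`.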
Cluster Placement Groups
Concept
Cluster placement groups pack instances close together on the same rack within a single AZ for minimum latency and maximum network throughput.
┌─────────────────────────────────────────────────────────┐
│ Cluster Placement Group │
├─────────────────────────────────────────────────────────┤
│ │
│ Single AZ / Single Rack │
│ ┌─────────────────────────────────────────┐ │
│ │ ┌────┐ ┌────┐ ┌────┐ ┌────┐ ┌────┐ │ │
│ │ │EC2 │ │EC2 │ │EC2 │ │EC2 │ │EC2 │ │ │
│ │ │ 1 │ │ 2 │ │ 3 │ │ 4 │ │ 5 │ │ │
│ │ └────┘ └────┘ └────┘ └────┘ └────┘ │ │
│ │ Packed on same rack │ │
│ └─────────────────────────────────────────┘ │
│ │
│ Pros: 10-100 Gbps network, microsecond latency │
│ Cons: Rack failure affects all instances │
└─────────────────────────────────────────────────────────┘
Use Cases
| Use Case | Reason |
|---|---|
| HPC (High Performance Computing) | Fast inter-node communication |
| Big Data Analytics | High-throughput data exchange between processing nodes |
| MPI Workloads | Minimize message passing latency |
| Financial Trading | Microsecond-level latency matters |
Limitations
- Single AZ only
- No fixed instance limit, but insufficient-capacity errors are possible (launching all instances in a single request reduces the risk)
- Same instance type recommended
Exam Tip
Exam Point: Cluster placement groups work in a single AZ only. For high availability, choose Spread or Partition.
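Because capacity on a single rack is finite, AWS recommends launching all cluster-group instances in one request. A sketch of such a launch (the AMI ID, instance type, and group name are placeholders):

```shell
# Launch 5 identical instances into the cluster group in a single request;
# one request lets EC2 place them together or fail fast with a capacity error.
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5n.18xlarge \
    --count 5 \
    --placement GroupName=hpc-cluster-pg
```

Using a network-optimized instance type (here `c5n`) is what actually delivers the advertised throughput; the placement group only removes the physical distance.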
Spread Placement Groups
Concept
Spread placement groups distribute instances across different hardware (racks) to isolate individual instance failures.
┌─────────────────────────────────────────────────────────┐
│ Spread Placement Group │
├─────────────────────────────────────────────────────────┤
│ │
│ AZ-a AZ-b AZ-c │
│ ┌─────┐ ┌─────┐ ┌─────┐ │
│ │Rack1│ │Rack3│ │Rack5│ │
│ │┌───┐│ │┌───┐│ │┌───┐│ │
│ ││EC2││ ││EC2││ ││EC2││ │
│ │└───┘│ │└───┘│ │└───┘│ │
│ └─────┘ └─────┘ └─────┘ │
│ ┌─────┐ ┌─────┐ ┌─────┐ │
│ │Rack2│ │Rack4│ │Rack6│ │
│ │┌───┐│ │┌───┐│ │┌───┐│ │
│ ││EC2││ ││EC2││ ││EC2││ │
│ │└───┘│ │└───┘│ │└───┘│ │
│ └─────┘ └─────┘ └─────┘ │
│ │
│ Feature: Each instance on different rack │
│ Limit: Max 7 instances per AZ │
└─────────────────────────────────────────────────────────┘
Limitations
| Limit | Description |
|---|---|
| Instances per AZ | Maximum 7 |
| Total instances | Number of AZs × 7 (e.g., 4 AZs = 28) |
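The capacity formula in the table can be checked directly; assuming a Region with 4 AZs:

```shell
# Spread group capacity = (number of AZs) x (7 running instances per AZ)
azs=4
max_instances=$((azs * 7))
echo "$max_instances"   # 28
```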
Use Cases
| Use Case | Reason |
|---|---|
| Critical applications | Isolate individual instance failures |
| Small-scale high availability | Minimize simultaneous failures |
| Database replication | Separate primary/replica |
Partition Placement Groups
Concept
Partition placement groups divide instances into logical partitions, where each partition is placed on different racks. Ideal for large-scale distributed workloads.
┌─────────────────────────────────────────────────────────┐
│ Partition Placement Group │
├─────────────────────────────────────────────────────────┤
│ │
│ AZ-a │
│ ┌─────────────────┐ ┌─────────────────┐ │
│ │ Partition 1 │ │ Partition 2 │ │
│ │ (Rack A) │ │ (Rack B) │ │
│ │ ┌───┐ ┌───┐ │ │ ┌───┐ ┌───┐ │ │
│ │ │EC2│ │EC2│ │ │ │EC2│ │EC2│ │ │
│ │ └───┘ └───┘ │ │ └───┘ └───┘ │ │
│ │ ┌───┐ ┌───┐ │ │ ┌───┐ ┌───┐ │ │
│ │ │EC2│ │EC2│ │ │ │EC2│ │EC2│ │ │
│ │ └───┘ └───┘ │ │ └───┘ └───┘ │ │
│ └─────────────────┘ └─────────────────┘ │
│ │
│ Features: │
│ - No rack sharing between partitions │
│ - Multiple instances per partition │
│ - Max 7 partitions per AZ │
│   - Instance count limited only by account limits      │
└─────────────────────────────────────────────────────────┘
Limitations
| Limit | Description |
|---|---|
| Partitions per AZ | Maximum 7 |
| Total instances | Limited only by your account limits |
| Instances per partition | No limit |
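The partition count is fixed at group creation via `--partition-count`. A sketch creating a group with the per-AZ maximum (the group name is illustrative):

```shell
# Create a partition placement group with the maximum 7 partitions per AZ
aws ec2 create-placement-group \
    --group-name hadoop-partition-pg \
    --strategy partition \
    --partition-count 7
```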
Use Cases
| Use Case | Reason |
|---|---|
| Hadoop/HDFS | Replicate data blocks per partition |
| Cassandra | Distribute ring structure nodes |
| Kafka | Separate broker partitions |
| HBase | Isolate RegionServers |
Partition Metadata
Partition placement groups provide partition information via the metadata service:
# Get partition number for the instance (IMDSv2: fetch a session token first)
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/placement/partition-number
Applications can use this for rack-aware replication.
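Partition assignments can also be read from outside the instances, e.g. when building a topology map for the cluster. A sketch using `describe-instances` (the group name is illustrative):

```shell
# List instance IDs with their partition numbers for a partition group
aws ec2 describe-instances \
    --filters "Name=placement-group-name,Values=hadoop-partition-pg" \
    --query "Reservations[].Instances[].[InstanceId,Placement.PartitionNumber]" \
    --output text
```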
Cluster vs Spread vs Partition Comparison
| Feature | Cluster | Spread | Partition |
|---|---|---|---|
| Purpose | Low latency | Fault isolation | Large-scale distributed |
| AZ Support | Single AZ | Multi-AZ | Multi-AZ |
| Instance Limit | None (capacity permitting) | 7 per AZ | Account limits only |
| Rack Placement | Same rack | All different racks | Different rack per partition |
| Failure Scope | All affected | 1 instance affected | Partition-level |
| Cost | Free | Free | Free |
Selection Guide
┌─────────────────────────────────────────────────────────┐
│ Placement Group Selection │
├─────────────────────────────────────────────────────────┤
│ │
│ Q1: Is network performance the top priority? │
│ (HPC, Big Data, Finance) │
│ └── Yes → Cluster │
│ │
│ Q2: Small scale with individual fault isolation? │
│ (Critical apps, 7 or fewer instances) │
│ └── Yes → Spread │
│ │
│ Q3: Large-scale distributed system? │
│ (Hadoop, Cassandra, Kafka) │
│ └── Yes → Partition │
│ │
│ Q4: No special requirements? │
│ └── No placement group needed │
└─────────────────────────────────────────────────────────┘
SAA-C03 Exam Focus Points
- ✅ Scenario-based selection: HPC → Cluster, Small HA → Spread, Hadoop/Kafka → Partition
- ✅ Spread limit: Max 7 instances per AZ
- ✅ Cluster constraint: Single AZ only
- ✅ Partition limit: Max 7 partitions per AZ; instance count limited only by account limits
- ✅ Cost: All placement groups are FREE
Exam Tip
Sample Exam Question: "A company needs to deploy a Hadoop cluster on EC2 with fault tolerance at the partition level. Which placement group strategy should they use?" → Answer: Partition (Hadoop requires partition-level fault isolation for rack-aware replication)
Frequently Asked Questions
Q: Do placement groups cost extra?
No. Placement groups are free. You only pay for the regular EC2 instance costs.
Q: Can I move a running instance to a placement group?
Not while it is running. Stop the instance, move it with `modify-instance-placement`, then start it again. (Creating an AMI and relaunching into the group also works.)
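A sketch of the stop-and-move flow with the AWS CLI (the instance ID and group name are placeholders):

```shell
# A running instance cannot join a group, but a stopped one can be moved.
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-placement \
    --instance-id i-0123456789abcdef0 \
    --group-name critical-spread-pg
aws ec2 start-instances --instance-ids i-0123456789abcdef0
```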
Q: What if I get a capacity error in a Cluster placement group?
- Stop and restart the instances
- Try a different instance type
- Create a new Cluster placement group
- Request capacity reservation from AWS Support
Q: When should I choose Spread vs Partition?
- 7 or fewer instances + individual fault isolation → Spread
- Large-scale + group-level fault isolation → Partition
Q: What happens if I don't specify a partition in Partition placement group?
AWS automatically assigns a partition. Specify PartitionNumber to place instances in a specific partition.
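To pin an instance to a specific partition, pass `PartitionNumber` alongside the group name at launch. A sketch (the AMI ID, instance type, and group name are placeholders):

```shell
# Place the instance in partition 3 of the partition group
aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type m5.xlarge \
    --count 1 \
    --placement "GroupName=hadoop-partition-pg,PartitionNumber=3"
```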