S3 Transfer Acceleration and Multipart Upload: Optimizing Large File Uploads
Learn how to improve large file transfer speeds by up to 61% using S3 Transfer Acceleration and Multipart Upload.
Related Exam Domains
- Domain 3: Design High-Performing Architectures
Key Takeaway
For files over 100MB, use Multipart Upload for parallel transfer. For geographically distant uploads, use Transfer Acceleration to leverage Edge Locations. Using both together can reduce upload time by up to 61%.
Exam Tip
Exam Essential: "Large file + long-distance transfer = Transfer Acceleration + Multipart Upload combination"
S3 File Transfer Limitations
Standard S3 uploads have several limitations:
| Limitation | Value |
|---|---|
| Single PUT max size | 5GB |
| Maximum object size | 5TB |
| Standard upload reliability | Must restart from beginning on network failure |
Problem Scenarios:
- Cannot upload files over 5GB with single PUT
- Increased latency for long-distance transfers
- Large file upload failures on unstable networks
What is Multipart Upload?
A feature that splits large files into smaller parts for parallel upload.
How It Works
┌─────────────────────────────────────────────────────────────┐
│ 5GB File │
├───────────┬───────────┬───────────┬───────────┬────────────┤
│ Part 1 │ Part 2 │ Part 3 │ Part 4 │ Part 5 │
│ (1GB) │ (1GB) │ (1GB) │ (1GB) │ (1GB) │
└─────┬─────┴─────┬─────┴─────┬─────┴─────┬─────┴──────┬─────┘
│ │ │ │ │
▼ ▼ ▼ ▼ ▼
[Thread 1] [Thread 2] [Thread 3] [Thread 4] [Thread 5]
│ │ │ │ │
└───────────┴───────────┼───────────┴────────────┘
▼
[S3 Bucket]
(Auto-assembled)
Multipart Upload Benefits
- Parallel Processing: Upload multiple parts simultaneously
- Retry Efficiency: Only re-upload failed parts
- Over 5GB Support: Upload up to 5TB
- Pause/Resume: Stop and continue later
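Under the hood, multipart upload is a three-call flow: `create_multipart_upload`, one `upload_part` per chunk, then `complete_multipart_upload`. Here is a minimal sketch of that flow; the `multipart_upload` helper and its injectable `client` parameter are illustrative (not part of boto3), and in practice `client` would be a boto3 S3 client:

```python
def multipart_upload(client, bucket, key, fileobj, part_size=25 * 1024 * 1024):
    """Illustrative helper: upload fileobj via the low-level multipart API.

    `client` is anything exposing the three S3 multipart calls
    (a boto3 S3 client in practice, or a stub when testing offline).
    """
    upload_id = client.create_multipart_upload(Bucket=bucket, Key=key)["UploadId"]
    parts = []
    part_number = 1
    while True:
        chunk = fileobj.read(part_size)
        if not chunk:
            break
        # Each part is uploaded independently, so a failed part can be
        # retried on its own without restarting the whole transfer
        resp = client.upload_part(
            Bucket=bucket, Key=key, UploadId=upload_id,
            PartNumber=part_number, Body=chunk,
        )
        parts.append({"PartNumber": part_number, "ETag": resp["ETag"]})
        part_number += 1
    # S3 assembles the parts into a single object only after this call
    client.complete_multipart_upload(
        Bucket=bucket, Key=key, UploadId=upload_id,
        MultipartUpload={"Parts": parts},
    )
    return parts
```

You rarely write this yourself; high-level helpers like `upload_file` do it automatically. But the flow explains the benefits above: parts run in parallel, and only failed parts need to be resent.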
Part Size Limits
| Item | Value |
|---|---|
| Minimum part size | 5MB (except last part) |
| Maximum part size | 5GB |
| Maximum parts | 10,000 |
| Recommended part size | 25-100MB |
Exam Tip
Calculation Practice: How many parts, at minimum, are needed to upload a 5TB file? → 5TB ÷ 5GB (max part size) = 1,000 parts (1,024 if you count in binary units), either way well within the 10,000-part limit
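The part-count arithmetic above is easy to check in code. A quick sketch (the helper names are illustrative):

```python
import math

def min_parts(object_size, max_part_size):
    # Smallest part count when every part is as large as allowed
    return math.ceil(object_size / max_part_size)

def min_part_size(object_size, max_parts=10_000):
    # Smallest part size that keeps an object within the 10,000-part limit
    return math.ceil(object_size / max_parts)

print(min_parts(5 * 10**12, 5 * 10**9))  # decimal 5 TB / 5 GB -> 1000
print(min_parts(5 * 2**40, 5 * 2**30))   # binary 5 TiB / 5 GiB -> 1024
print(min_part_size(5 * 2**40))          # a 5 TiB object needs parts of at least ~550 MB
```

The last line shows why the 25-100MB recommendation does not apply to the very largest objects: a maximum-size object forces parts of roughly 550 MB just to stay under the part cap.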
What is Transfer Acceleration?
A feature that accelerates transfer speeds to S3 through AWS Edge Locations.
How It Works
[User: Seoul] [S3 Bucket: Virginia]
│ │
│ ← Regular transfer: via public internet → │
│ (high latency, unstable) │
│ │
▼ ▼
┌────────────┐ AWS Backbone Network ┌────────────┐
│ Edge Loc. │ ════════════════════════════> │ S3 │
│ (Seoul) │ (Optimized private path) │ (Virginia) │
└────────────┘ └────────────┘
Transfer Acceleration Speed Improvements
| Distance | Regular Upload | Acceleration | Improvement |
|---|---|---|---|
| Same region | Fast | Similar or slower | 0% |
| Intercontinental | Medium | Fast | 50-100% |
| Opposite side of globe | Slow | Very fast | 200-500% |
Exam Tip
Note: Transfer Acceleration is effective for long-distance transfers. It may actually be slower within the same region.
Using Both Features Together
Combining Transfer Acceleration and Multipart Upload enables up to 61% upload time reduction.
AWS CLI Usage Example
```bash
# 1. Enable Transfer Acceleration on the bucket
aws s3api put-bucket-accelerate-configuration \
    --bucket my-bucket \
    --accelerate-configuration Status=Enabled

# 2. Upload through the accelerate endpoint
aws s3 cp large-file.zip s3://my-bucket/ \
    --endpoint-url https://s3-accelerate.amazonaws.com
```
Python boto3 Example
```python
import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Enable Transfer Acceleration
s3_config = Config(
    s3={'use_accelerate_endpoint': True}
)

# Multipart settings
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # multipart for files of 100MB+
    max_concurrency=10,                     # 10 concurrent part uploads
    multipart_chunksize=25 * 1024 * 1024,   # 25MB parts
    use_threads=True
)

s3_client = boto3.client('s3', config=s3_config)

# Upload file
s3_client.upload_file(
    'large-file.zip',
    'my-bucket',
    'large-file.zip',
    Config=transfer_config
)
```
When to Use What?
Multipart Upload Only
- Large files over 100MB
- Same-region uploads
- Unstable network environments
- Files over 5GB (required)
Transfer Acceleration Only
- Small file long-distance transfers
- Global user uploads
- Need consistent transfer speeds
Use Both
- Large files + long-distance transfers (optimal)
- Global media uploads
- Intercontinental backup/replication
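The decision rules above condense into a small helper. This is a sketch of this guide's rules of thumb (the function name and the 100MB/5GB thresholds come from the recommendations in this post, not from any AWS API):

```python
MB, GB = 1000**2, 1000**3

def upload_strategy(size_bytes, long_distance):
    """Pick an upload approach from the rules of thumb above."""
    if size_bytes > 5 * GB:
        multipart = True  # single PUT tops out at 5GB, so multipart is required
    else:
        multipart = size_bytes >= 100 * MB
    if multipart and long_distance:
        return "multipart + transfer acceleration"
    if multipart:
        return "multipart upload"
    if long_distance:
        return "transfer acceleration"
    return "single PUT"

print(upload_strategy(10 * GB, long_distance=False))  # multipart upload
print(upload_strategy(200 * MB, long_distance=True))  # multipart + transfer acceleration
```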
Transfer Acceleration Pricing
| Data Path | Cost (per GB) |
|---|---|
| Edge → S3 (US, Europe, Japan) | $0.04 |
| Edge → S3 (Other regions) | $0.08 |
| S3 → Edge (Download) | $0.04-$0.08 |
Cost Optimization Tips:
- No charge if there's no speed improvement
- Use speed comparison test tool beforehand
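With the per-GB rates in the table, estimating the acceleration surcharge is simple multiplication. A sketch (rates are the figures above and may change; this is the surcharge on top of regular data-transfer pricing):

```python
# Per-GB Transfer Acceleration surcharge by edge location group (from the table above)
RATES = {
    "us_eu_jp": 0.04,
    "other": 0.08,
}

def acceleration_surcharge(gb, edge_group="us_eu_jp"):
    """Estimated extra cost (USD) for accelerating `gb` gigabytes of upload."""
    return round(gb * RATES[edge_group], 2)

print(acceleration_surcharge(500))           # 500GB through a US/EU/JP edge -> 20.0
print(acceleration_surcharge(500, "other"))  # same volume elsewhere -> 40.0
```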
Speed Test Method
AWS provides an S3 Transfer Acceleration Speed Comparison tool:
https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html
Use this tool to compare regular upload vs Acceleration speeds by region.
Limitations and Considerations
Transfer Acceleration Limitations
- Bucket Naming Rules: Must be DNS-compatible and cannot contain periods (.)
- Activation Time: Wait up to 20 minutes after enabling the configuration
- Billing Conditions: Only charged when the transfer speed actually improves
Multipart Upload Considerations
- Clean Up Incomplete Uploads: Parts from failed multipart uploads remain in S3 and incur storage costs
- Auto-Cleanup: Set a Lifecycle policy such as the following:
```json
{
  "Rules": [{
    "ID": "AbortIncompleteMultipartUpload",
    "Status": "Enabled",
    "Filter": {"Prefix": ""},
    "AbortIncompleteMultipartUpload": {
      "DaysAfterInitiation": 7
    }
  }]
}
```
SAA-C03 Exam Focus Points
- ✅ Files over 5GB: "Must use Multipart Upload"
- ✅ Long-distance transfer optimization: "Transfer Acceleration + Edge Location"
- ✅ Cost vs Speed: "Transfer Acceleration not needed for same region"
- ✅ Failure Recovery: "Multipart only retransmits failed parts"
- ✅ Incomplete Uploads: "Need Lifecycle policy for cleanup"
Exam Tip
Sample Exam Question: "Global users upload videos (2GB) to S3 (us-east-1). Asian users have slow upload speeds. What is the most cost-effective solution?" → Answer: Enable Transfer Acceleration (leverage Edge Locations)
Frequently Asked Questions (FAQ)
Q: Is Multipart Upload applied automatically?
The AWS CLI and SDKs automatically switch to Multipart Upload for files above a certain size; both the CLI and boto3 use 8MB as the default threshold.
Q: Does Transfer Acceleration add costs, so should I always use it?
No. It's unnecessary for same-region transfers or already fast network environments. Test with the speed comparison tool before deciding.
Q: What happens if Multipart Upload fails?
Uploaded parts remain in S3 and incur storage costs. Set an AbortIncompleteMultipartUpload Lifecycle policy for automatic cleanup.
Q: Does Transfer Acceleration apply to downloads too?
Yes. Both uploads and downloads are accelerated. Use the same accelerate endpoint.
Q: Does Multipart Upload work with versioning-enabled buckets?
Yes. Objects uploaded via Multipart Upload are also versioned. Completed uploads are saved as new versions.
Related Posts
- S3 Storage Classes Complete Guide
- S3 Replication (CRR, SRR) Complete Guide
- S3 Lifecycle Policy Design