
S3 Transfer Acceleration and Multipart Upload: Optimizing Large File Uploads

Learn how to improve large file transfer speeds by up to 61% using S3 Transfer Acceleration and Multipart Upload.

PHILOLAMB · Updated: January 31, 2026
Tags: S3, Transfer Acceleration, Multipart Upload, File Transfer, Performance Optimization

Related Exam Domains

  • Domain 3: Design High-Performing Architectures

Key Takeaway

For files over 100MB, use Multipart Upload for parallel transfer. For geographically distant uploads, use Transfer Acceleration to leverage Edge Locations. Using both together can reduce upload time by up to 61%.

Exam Tip

Exam Essential: "Large file + long-distance transfer = Transfer Acceleration + Multipart Upload combination"

S3 File Transfer Limitations

Standard S3 uploads have several limitations:

| Limitation | Value |
| --- | --- |
| Single PUT max size | 5GB |
| Maximum object size | 5TB |
| Standard upload reliability | Must restart from beginning on network failure |

Problem Scenarios:

  • Cannot upload files over 5GB with single PUT
  • Increased latency for long-distance transfers
  • Large file upload failures on unstable networks

What is Multipart Upload?

A feature that splits large files into smaller parts for parallel upload.

How It Works

┌────────────────────────────────────────────────────────────┐
│                          5GB File                          │
├───────────┬───────────┬───────────┬───────────┬────────────┤
│  Part 1   │  Part 2   │  Part 3   │  Part 4   │   Part 5   │
│  (1GB)    │  (1GB)    │  (1GB)    │  (1GB)    │   (1GB)    │
└─────┬─────┴─────┬─────┴─────┬─────┴─────┬─────┴──────┬─────┘
      │           │           │           │            │
      ▼           ▼           ▼           ▼            ▼
   [Thread 1] [Thread 2] [Thread 3] [Thread 4]  [Thread 5]
      │           │           │           │            │
      └───────────┴───────────┼───────────┴────────────┘
                              ▼
                        [S3 Bucket]
                    (Auto-assembled)

Multipart Upload Benefits

  1. Parallel Processing: Upload multiple parts simultaneously
  2. Retry Efficiency: Only re-upload failed parts
  3. Over 5GB Support: Upload up to 5TB
  4. Pause/Resume: Stop and continue later
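The parallel-processing and retry benefits above both come from the same mechanism: the file is divided into independent byte ranges, and each range is uploaded (or re-uploaded) on its own. A minimal sketch of that splitting step, assuming a simple helper of our own (this is not an AWS API, just the arithmetic behind the diagram):

```python
def split_into_parts(total_size: int, part_size: int) -> list[tuple[int, int, int]]:
    """Return (part_number, start_byte, end_byte) tuples covering the file.

    Part numbers are 1-based, matching S3's UploadPart convention.
    The last part may be smaller than part_size.
    """
    parts = []
    part_number = 1
    start = 0
    while start < total_size:
        end = min(start + part_size, total_size)
        parts.append((part_number, start, end))
        part_number += 1
        start = end
    return parts

# The 5GB file from the diagram above, split into 1GB parts
GB = 1024 ** 3
parts = split_into_parts(5 * GB, 1 * GB)
print(len(parts))  # 5 parts, one per upload thread
```

Because each tuple is independent, a failed part can be retried by re-uploading just that byte range, which is exactly why multipart recovery is cheap.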

Part Size Limits

| Item | Value |
| --- | --- |
| Minimum part size | 5MB (except last part) |
| Maximum part size | 5GB |
| Maximum parts | 10,000 |
| Recommended part size | 25-100MB |

Exam Tip

Calculation Practice: How many parts minimum are needed to upload a 5TB file? → 5TB ÷ 5GB (max part size) = 1,024 parts (5 × 1,024GB ÷ 5GB, binary units), comfortably within the 10,000-part limit
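The arithmetic in the tip can be checked directly. Using binary units (5 TiB object, 5 GiB parts) the exact minimum is 1,024 parts; the round figure of 1,000 comes from using decimal units:

```python
import math

TIB = 1024 ** 4
GIB = 1024 ** 3

max_object = 5 * TIB  # S3 maximum object size
max_part = 5 * GIB    # maximum part size

min_parts = math.ceil(max_object / max_part)
print(min_parts)  # 1024, well within the 10,000-part limit
```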

What is Transfer Acceleration?

A feature that accelerates transfer speeds to S3 through AWS Edge Locations.

How It Works

[User: Seoul]                              [S3 Bucket: Virginia]
     │                                           │
     │  ← Regular transfer: via public internet → │
     │     (high latency, unstable)               │
     │                                           │
     ▼                                           ▼
┌────────────┐     AWS Backbone Network      ┌────────────┐
│ Edge Loc.  │ ════════════════════════════> │   S3       │
│ (Seoul)    │  (Optimized private path)     │ (Virginia) │
└────────────┘                               └────────────┘

Transfer Acceleration Speed Improvements

| Distance | Regular Upload | Acceleration | Improvement |
| --- | --- | --- | --- |
| Same region | Fast | Similar or slower | 0% |
| Intercontinental | Medium | Fast | 50-100% |
| Opposite side of globe | Slow | Very fast | 200-500% |

Exam Tip

Note: Transfer Acceleration is effective for long-distance transfers. It may actually be slower within the same region.

Using Both Features Together

Combining Transfer Acceleration and Multipart Upload enables up to 61% upload time reduction.

AWS CLI Usage Example

# 1. Enable Transfer Acceleration on bucket
aws s3api put-bucket-accelerate-configuration \
    --bucket my-bucket \
    --accelerate-configuration Status=Enabled

# 2. Upload using Accelerate endpoint
aws s3 cp large-file.zip s3://my-bucket/ \
    --endpoint-url https://s3-accelerate.amazonaws.com

Python boto3 Example

import boto3
from boto3.s3.transfer import TransferConfig
from botocore.config import Config

# Enable Transfer Acceleration
s3_config = Config(
    s3={'use_accelerate_endpoint': True}
)

# Multipart settings
transfer_config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,  # Multipart for 100MB+
    max_concurrency=10,                      # 10 concurrent uploads
    multipart_chunksize=25 * 1024 * 1024,   # 25MB chunks
    use_threads=True
)

s3_client = boto3.client('s3', config=s3_config)

# Upload file
s3_client.upload_file(
    'large-file.zip',
    'my-bucket',
    'large-file.zip',
    Config=transfer_config
)
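For a multi-gigabyte transfer it helps to see progress. upload_file accepts a Callback that boto3 invokes with the number of bytes sent in each chunk; because multipart uploads run on several threads, the callback must be thread-safe. A minimal sketch (the class name is ours; the Callback parameter itself is part of boto3's transfer API):

```python
import os
import sys
import threading

class ProgressPercentage:
    """Callback for boto3 upload_file: prints cumulative upload progress."""

    def __init__(self, filename: str):
        self._filename = filename
        self._size = os.path.getsize(filename)
        self._seen = 0
        self._lock = threading.Lock()  # upload threads call us concurrently

    def __call__(self, bytes_amount: int) -> None:
        with self._lock:
            self._seen += bytes_amount
            pct = (self._seen / self._size) * 100 if self._size else 100.0
            sys.stdout.write(
                f"\r{self._filename}: {self._seen}/{self._size} ({pct:.1f}%)"
            )
            sys.stdout.flush()
```

To use it, pass `Callback=ProgressPercentage('large-file.zip')` to upload_file alongside the Config argument shown above.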

When to Use What?

Multipart Upload Only

  • Large files over 100MB
  • Same-region uploads
  • Unstable network environments
  • Files over 5GB (required)

Transfer Acceleration Only

  • Small file long-distance transfers
  • Global user uploads
  • Need consistent transfer speeds

Use Both

  • Large files + long-distance transfers (optimal)
  • Global media uploads
  • Intercontinental backup/replication

Transfer Acceleration Pricing

| Data Path | Cost (per GB) |
| --- | --- |
| Edge → S3 (US, Europe, Japan) | $0.04 |
| Edge → S3 (Other regions) | $0.08 |
| S3 → Edge (Download) | $0.04-$0.08 |

Cost Optimization Tips:

  • No charge if there's no speed improvement
  • Use speed comparison test tool beforehand

Speed Test Method

AWS provides an S3 Transfer Acceleration Speed Comparison tool:

https://s3-accelerate-speedtest.s3-accelerate.amazonaws.com/en/accelerate-speed-comparsion.html

Use this tool to compare regular upload vs Acceleration speeds by region.

Limitations and Considerations

Transfer Acceleration Limitations

  1. Bucket Naming Rules
    • Must be DNS-compatible
    • Cannot contain periods (.)
  2. Activation Time
    • Wait up to 20 minutes after configuration
  3. Billing Conditions
    • Only charged when speed improves
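The naming restriction can be checked before you enable acceleration. A rough pre-flight check, assuming a helper of our own (the rules encoded here are S3's DNS-compatible naming — 3-63 characters, lowercase letters, digits, hyphens — plus the no-periods restriction, since the bucket name becomes a subdomain of s3-accelerate.amazonaws.com):

```python
import re

def accelerate_compatible(bucket: str) -> bool:
    """True if the bucket name can be used with s3-accelerate endpoints."""
    if "." in bucket:
        return False  # periods break the accelerate endpoint's TLS/DNS
    # 3-63 chars, lowercase letters/digits/hyphens, no leading/trailing hyphen
    return re.fullmatch(r"[a-z0-9][a-z0-9-]{1,61}[a-z0-9]", bucket) is not None

print(accelerate_compatible("my-bucket"))  # True
print(accelerate_compatible("my.bucket"))  # False: contains a period
```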

Multipart Upload Considerations

  1. Clean Up Incomplete Uploads
    • Failed multipart uploads incur storage costs
    • Recommend setting Lifecycle policy for auto-cleanup
{
  "Rules": [{
    "ID": "AbortIncompleteMultipartUpload",
    "Status": "Enabled",
    "Filter": {},
    "AbortIncompleteMultipartUpload": {
      "DaysAfterInitiation": 7
    }
  }]
}
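The same policy can be applied programmatically with boto3's put_bucket_lifecycle_configuration. A sketch that builds the rule as a dict (the helper name is ours; the actual API call is commented out because it needs real credentials and a real bucket):

```python
def abort_incomplete_rule(days: int = 7) -> dict:
    """Lifecycle configuration that aborts incomplete multipart uploads."""
    return {
        "Rules": [{
            "ID": "AbortIncompleteMultipartUpload",
            "Status": "Enabled",
            "Filter": {},  # empty filter = apply to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": days},
        }]
    }

# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-bucket",
#     LifecycleConfiguration=abort_incomplete_rule(7),
# )
```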

SAA-C03 Exam Focus Points

  1. Files over 5GB: "Must use Multipart Upload"
  2. Long-distance transfer optimization: "Transfer Acceleration + Edge Location"
  3. Cost vs Speed: "Transfer Acceleration not needed for same region"
  4. Failure Recovery: "Multipart only retransmits failed parts"
  5. Incomplete Uploads: "Need Lifecycle policy for cleanup"

Exam Tip

Sample Exam Question: "Global users upload videos (2GB) to S3 (us-east-1). Asian users have slow upload speeds. What is the most cost-effective solution?" → Answer: Enable Transfer Acceleration (leverage Edge Locations)

Frequently Asked Questions (FAQ)

Q: Is Multipart Upload applied automatically?

The AWS CLI and SDKs automatically apply Multipart Upload for files above a certain size. Both the CLI and boto3 use an 8MB default threshold.

Q: Does Transfer Acceleration add costs, so should I always use it?

No. It's unnecessary for same-region transfers or already fast network environments. Test with the speed comparison tool before deciding.

Q: What happens if Multipart Upload fails?

Uploaded parts remain in S3 and incur storage costs. Set an AbortIncompleteMultipartUpload Lifecycle policy for automatic cleanup.

Q: Does Transfer Acceleration apply to downloads too?

Yes. Both uploads and downloads are accelerated. Use the same accelerate endpoint.

Q: Does Multipart Upload work with versioning-enabled buckets?

Yes. Objects uploaded via Multipart Upload are also versioned. Completed uploads are saved as new versions.

References