Object Storage
Store objects from zero bytes up to terabytes (5 TB per object on S3) and access them via HTTP/HTTPS
AWS S3
Bucket Management
# Create bucket (globally unique name)
aws s3 mb s3://my-unique-name-2024
# List buckets
aws s3 ls
# Delete empty bucket
aws s3 rb s3://my-unique-name-2024

Upload/Download
# Upload single file
aws s3 cp myfile.txt s3://my-bucket/
# Upload directory
aws s3 cp mydir s3://my-bucket/ --recursive
# Download
aws s3 cp s3://my-bucket/myfile.txt ./
# Sync (like rsync)
aws s3 sync s3://my-bucket/ ./local-dir

Storage Classes (save costs by choosing the right class)
| Class | Retrieval | Cost | Use Case |
|---|---|---|---|
| Standard | Instant | $0.023/GB | Frequently accessed |
| Standard-IA | Instant | $0.0125/GB | Infrequent (30 day min) |
| Glacier | Hours | $0.004/GB | Archive (90 day min) |
| Deep Archive | 12h+ | $0.00099/GB | Long-term (180 day min) |
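To see how the table translates into monthly spend, here is a small local helper (prices hard-coded from the table above; real bills also include request, retrieval, and minimum-duration charges):

```shell
# Back-of-envelope monthly storage cost: GB stored times per-GB price
# from the table above. Ignores request and retrieval fees.
monthly_cost() {
  awk -v gb="$1" -v price="$2" 'BEGIN { printf "%.2f", gb * price }'
}

# 500 GB held in Standard vs Glacier:
echo "Standard: \$$(monthly_cost 500 0.023)/month"   # Standard: $11.50/month
echo "Glacier:  \$$(monthly_cost 500 0.004)/month"   # Glacier:  $2.00/month
```

The class is chosen per object at upload time, e.g. `aws s3 cp big.tar s3://my-bucket/ --storage-class GLACIER`, or automatically via a lifecycle policy.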
Azure Blob Storage
# Create storage account
az storage account create --name mystorageacct --resource-group myRG
# Create container
az storage container create --name mycontainer --account-name mystorageacct
# Upload blob
az storage blob upload --file myfile.txt --container-name mycontainer --name myfile.txt --account-name mystorageacct
# Download blob
az storage blob download --file myfile.txt --container-name mycontainer --name myfile.txt --account-name mystorageacct

GCP Cloud Storage
# Create bucket
gsutil mb gs://my-bucket-name
# Upload
gsutil cp myfile.txt gs://my-bucket-name/
# Download
gsutil cp gs://my-bucket-name/myfile.txt ./
# List
gsutil ls gs://my-bucket-name/

File Storage
AWS EFS (Elastic File System) - NFS
Shared network file system that multiple EC2 instances can mount at the same time
# Create file system
aws efs create-file-system --performance-mode generalPurpose --throughput-mode bursting
# Create a mount target (one per Availability Zone the clients run in)
aws efs create-mount-target --file-system-id fs-xxx --subnet-id subnet-xxx --security-groups sg-xxx
# Mount on EC2
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-xxx.efs.us-east-1.amazonaws.com:/ /mnt/efs

Azure File Shares - SMB/NFS
# Create file share
az storage share create --name myshare --account-name mystorageacct
# Get connection info
az storage share url --name myshare --account-name mystorageacct
# Mount on Linux (SMB; the password is a storage account access key)
sudo mount -t cifs //mystorageacct.file.core.windows.net/myshare /mnt/azure -o username=mystorageacct,password=<storage-account-key>

GCP Filestore - NFS
# Create instance
gcloud filestore instances create my-nfs --zone us-central1-a --tier STANDARD --file-share name=share,capacity=1TB --network name=default
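The mount command below needs the instance's IP address, which can be looked up with `describe` (the zone is an example value; the `--format` path assumes the standard Filestore resource layout):

```shell
# Print the NFS server IP of the Filestore instance created above
gcloud filestore instances describe my-nfs --zone us-central1-a \
  --format="value(networks[0].ipAddresses[0])"
```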
# Mount on Compute Engine
# The export path is the file-share name from the create command
sudo mount -t nfs IP:/share /mnt/nfs

Backup and Disaster Recovery
AWS Backup
# Create backup vault
aws backup create-backup-vault --backup-vault-name myVault
# Create backup plan
aws backup create-backup-plan --backup-plan file://backup-plan.json
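The contents of `backup-plan.json` are not shown above; a minimal sketch (plan name, schedule, and retention are assumptions, not AWS defaults) could look like:

```shell
# Minimal backup plan: daily at 05:00 UTC, delete recovery points after 35 days
cat > backup-plan.json << 'EOF'
{
  "BackupPlanName": "daily-35d",
  "Rules": [
    {
      "RuleName": "daily",
      "TargetBackupVaultName": "myVault",
      "ScheduleExpression": "cron(0 5 * * ? *)",
      "Lifecycle": {"DeleteAfterDays": 35}
    }
  ]
}
EOF
```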
# Start backup
aws backup start-backup-job --backup-vault-name myVault --resource-arn arn:aws:ec2:us-east-1:xxx:volume/vol-xxx --iam-role-arn arn:aws:iam::xxx:role/backup-role

Azure Backup
# Create recovery services vault
az backup vault create --resource-group myRG --name myVault --location eastus
# Enable backup for VM
az backup protection enable-for-vm --resource-group myRG --vault-name myVault --vm myVM --policy-name DefaultPolicy
# Trigger backup
az backup protection backup-now --resource-group myRG --vault-name myVault --container-name myVM --item-name myVM

Versioning and Lifecycle
AWS S3 Versioning
Keep multiple versions of objects
# Enable versioning
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled
# List all versions
aws s3api list-object-versions --bucket my-bucket
# Get specific version
aws s3api get-object --bucket my-bucket --key myfile.txt --version-id abc123 myfile-v1.txt

Lifecycle Policy
Automatically move objects to cheaper storage or delete them
cat > lifecycle.json << 'EOF'
{
"Rules": [
{
"Id": "archive-old-data",
"Filter": {"Prefix": "logs/"},
"Status": "Enabled",
"Transitions": [
{"Days": 30, "StorageClass": "STANDARD_IA"},
{"Days": 90, "StorageClass": "GLACIER"},
{"Days": 180, "StorageClass": "DEEP_ARCHIVE"}
],
"Expiration": {"Days": 365}
}
]
}
EOF
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json

Security
Encryption at Rest
Data encrypted when stored
# Server-side encryption (S3 default)
aws s3 cp myfile.txt s3://my-bucket/ --sse AES256
# With customer-managed KMS key
aws s3 cp myfile.txt s3://my-bucket/ --sse aws:kms --sse-kms-key-id arn:aws:kms:...

Encryption in Transit
Data encrypted while moving
# Force HTTPS only
aws s3api put-bucket-policy --bucket my-bucket --policy '{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": ["arn:aws:s3:::my-bucket/*", "arn:aws:s3:::my-bucket"],
"Condition": {"Bool": {"aws:SecureTransport": "false"}}
}]
}'

Access Control
# Create public read-only bucket (new buckets block public ACLs by default; prefer a bucket policy)
aws s3api put-bucket-acl --bucket my-bucket --acl public-read
# Make object public
aws s3api put-object-acl --bucket my-bucket --key myfile.txt --acl public-read
# Better: Use bucket policy
aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json

Best Practices
✅ Use S3 versioning for critical data
✅ Enable MFA delete for prod buckets
✅ Use lifecycle policies to manage costs
✅ Enable server-side encryption
✅ Require HTTPS (deny HTTP)
✅ Use bucket policies for access control
✅ Enable CloudTrail/audit logging
✅ Regular backups (automated)
❌ Don't make buckets public unless necessary
❌ Don't store credentials in buckets
❌ Don't forget encryption
❌ Don't neglect access logging
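The list above recommends bucket policies over ACLs; a sketch of the `policy.json` referenced in the Access Control section, granting read-only access to a single IAM role instead of the public (the account ID and role name are placeholders):

```shell
# Least-privilege alternative to a public bucket: read access for one role only
cat > policy.json << 'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyForAppRole",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::123456789012:role/app-reader"},
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::my-bucket", "arn:aws:s3:::my-bucket/*"]
    }
  ]
}
EOF
```

Apply it with `aws s3api put-bucket-policy --bucket my-bucket --policy file://policy.json` as shown in the Access Control section.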