Cloudflare R2 is not hype. The egress fee really is zero.

I run a Laravel app that serves user-uploaded files. Product images, documents, exported PDFs. Nothing exotic. The kind of storage setup every SaaS developer builds and then slowly stops thinking about until the monthly AWS bill shows up.
My bill had two S3-related lines. One was for storage. Reasonable. The other was for data transfer out. That one grew every time the app got more users, because more users meant more downloads and more downloads meant more egress charges. I was being billed for the app being successful.
I moved to Cloudflare R2. The egress line is gone. Here is everything I wish I had known before starting.
Why the Egress Bill Hurts More Than You Think
AWS charges $0.023 per GB per month for S3 Standard storage in us-east-1. That is fine. At 100 GB stored that is $2.30 a month.
The other charge is the problem. Every gigabyte you serve out to the internet costs $0.09, after a free 100 GB per month allowance that is shared across all AWS services in your account. If you store 100 GB but serve 500 GB to users every month, your egress bill is around $36. Your storage bill is $2.30. The data transfer line is the real cost.
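That scaling is easy to put in code. A minimal sketch of the egress arithmetic, using the rates and allowance quoted above (illustrative only, not a billing calculator):

```go
package main

import "fmt"

// S3 egress for a month: $0.09/GB after the 100 GB free allowance
// that is shared across the account. Rates hardcoded from the example.
func s3Egress(servedGB float64) float64 {
	const ratePerGB = 0.09
	const freeGB = 100.0
	billable := servedGB - freeGB
	if billable < 0 {
		billable = 0
	}
	return billable * ratePerGB
}

func main() {
	// 500 GB served: 400 billable GB at $0.09
	fmt.Printf("$%.2f\n", s3Egress(500)) // → $36.00
}
```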
This is not a pricing accident. Once your data lives in S3, the cheapest place to serve it is CloudFront. The cheapest place to process it is Lambda. The cheapest place to query it is Athena. Every step deeper into the ecosystem makes leaving more expensive. Egress fees are the mechanism that keeps you inside.
Cloudflare R2 charges $0.015 per GB per month for storage. Data transfer out to the internet is free. Not discounted, not tiered. Free. The only charges are storage and operations. Class A operations (writes, uploads) cost $4.50 per million requests. Class B operations (reads, downloads) cost $0.36 per million. The free tier covers 10 GB of storage, 1 million Class A operations and 10 million Class B operations every month.
The Laravel Config
R2 is S3-compatible, so the setup in Laravel is short. You do not need a special package. Install the S3 Flysystem adapter if you do not already have it:
```shell
composer require league/flysystem-aws-s3-v3 "^3.0" --with-all-dependencies
```

Laravel’s official filesystem docs (as of Laravel 12) list Cloudflare R2 as a supported S3-compatible provider. Add a new disk to config/filesystems.php:
```php
'r2' => [
    'driver' => 's3',
    'key' => env('R2_ACCESS_KEY_ID'),
    'secret' => env('R2_SECRET_ACCESS_KEY'),
    'region' => 'auto',
    'bucket' => env('R2_BUCKET'),
    'url' => env('R2_URL'),
    'endpoint' => env('R2_ENDPOINT'),
    'use_path_style_endpoint' => false,
    'retain_visibility' => false,
    'throw' => true,
],
```

Your .env:
```
R2_ACCESS_KEY_ID=your-access-key
R2_SECRET_ACCESS_KEY=your-secret-key
R2_BUCKET=your-bucket-name
R2_ENDPOINT=https://<your-account-id>.r2.cloudflarestorage.com
R2_URL=https://your-custom-domain.com
```

Set FILESYSTEM_DISK=r2 in .env and every Storage::put(), Storage::get() and Storage::url() call in your codebase keeps working with no changes to your application logic.
The Gotchas Nobody Warns You About
R2 is not a perfect clone of S3. It works for the common operations, but a few differences will catch you.
The ACL problem is the most common one.
S3 has per-object ACLs. When Flysystem copies or moves a file with a visibility setting, it tries to send an ACL header to the storage API. R2 does not support per-object ACLs. The operation either fails silently or throws an UnableToWriteFile exception depending on your version.
The fix is 'retain_visibility' => false in your disk config. That tells Flysystem not to attempt setting ACLs on copy and move operations. Without it, Storage::copy() will break.
For making files publicly accessible, you control visibility at the bucket level in the Cloudflare dashboard, not per object. Connect a custom domain to the bucket and every object in it is served publicly through that domain.
No Object Lock.
If you need WORM storage (write once, read many) for financial records, audit logs or compliance retention, R2 does not support it as of early 2026. S3 has Object Lock with both Governance and Compliance modes. Stay on S3 for that use case.
No Glacier equivalent.
R2 has Standard storage at $0.015/GB and an Infrequent Access class at $0.01/GB. There is no deep archive tier. S3 Glacier Deep Archive costs $0.00099/GB. If you are archiving data you almost never read, S3 is still the cheaper option for pure cold storage.
No native AWS service integrations.
R2 event notifications connect to Cloudflare Workers and Cloudflare Queues. They do not trigger Lambda functions or feed AWS EventBridge. If your data pipeline depends on S3 events flowing into AWS-native services, those integrations break when you move the bucket. R2 has its own event system but it is not a drop-in replacement for the AWS side.
The CRC32 issue (resolved, but worth knowing about).
In January 2025, AWS pushed aws-sdk-php version 3.337.0, which added default CRC32 checksum headers to all upload operations. R2 did not support those headers, and uploads failed with a Header 'x-amz-checksum-crc32' not implemented error. Cloudflare resolved this on their end on February 3, 2025. If you are running a newer SDK version you should not hit this. If you upgrade your dependencies and suddenly see that error, check the Cloudflare R2 status page before debugging your own code.
Moving the Data
Both methods below are free to run, but you still pay R2 Class A operation costs (writes) on the receiving end. Before starting either, contact AWS Support about their data transfer out credit program for customers migrating away from S3. The credit is not automatic, but it is worth requesting before moving anything large.
Method 1: Super Slurper (Cloudflare’s built-in tool)
Super Slurper is a one-click bulk copy tool in the Cloudflare dashboard. Go to R2 > Data Migration > Migrate files, select Amazon S3 as the source, paste your IAM credentials and bucket name and it starts copying everything in parallel using Cloudflare’s own network.
A few limitations to know before you start. Objects over 1 TB are skipped and need to be handled separately. Objects in S3 Glacier tiers (except Glacier Instant Retrieval) are skipped. ETags on migrated objects may differ from the originals because Super Slurper can split large files into multipart uploads with different chunk sizes than the original upload used. If anything in your code compares ETags before and after migration, that will break.
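The ETag drift makes sense once you see how multipart ETags are derived. For a single-part upload the ETag is the MD5 of the object; for a multipart upload, S3's convention is the MD5 of the concatenated part MD5s plus a -<partcount> suffix. A self-contained sketch of that convention (pure computation, no SDK calls) shows the same bytes yielding different ETags under different chunk sizes:

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// multipartETag mimics the S3 multipart ETag convention: hash each
// part, then take the MD5 of the concatenated part hashes and append
// the part count. A single-part object gets a plain MD5 hex digest.
func multipartETag(data []byte, partSize int) string {
	if len(data) <= partSize {
		return fmt.Sprintf("%x", md5.Sum(data)) // single-part: plain MD5
	}
	var concat []byte
	parts := 0
	for off := 0; off < len(data); off += partSize {
		end := off + partSize
		if end > len(data) {
			end = len(data)
		}
		sum := md5.Sum(data[off:end])
		concat = append(concat, sum[:]...)
		parts++
	}
	return fmt.Sprintf("%x-%d", md5.Sum(concat), parts)
}

func main() {
	data := make([]byte, 25<<20) // 25 MiB of zeros
	fmt.Println(multipartETag(data, 8<<20))  // four parts, ends in "-4"
	fmt.Println(multipartETag(data, 16<<20)) // two parts, ends in "-2"
}
```

Same bytes, different chunking, different ETag. This is why any before/after comparison in your code has to use something chunk-independent, like object size or your own stored checksum.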
After the bulk copy, enable Sippy as a safety net. Sippy is a lazy proxy: when a request hits R2 for an object that did not exist yet at copy time, it pulls it from S3 and caches it. This covers anything uploaded between your Super Slurper run and your cutover date. Once Sippy stops pulling from S3, the migration is complete.
Super Slurper is the right choice when your bucket is straightforward and you do not need to filter, rename or transform objects during the move.
Method 2: Custom Go script
Use this when you want control over what gets moved, at what concurrency and with your own logging. The script reads from S3 and writes to R2 in parallel, streaming each object directly without writing to disk. Ten workers run concurrently by default. Adjust the workers constant for your network and object sizes.
```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"sync"
	"sync/atomic"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/credentials"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

const workers = 10

func main() {
	// S3 source credentials
	s3Cfg, err := config.LoadDefaultConfig(context.Background(),
		config.WithRegion(os.Getenv("S3_REGION")),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
			os.Getenv("S3_ACCESS_KEY_ID"),
			os.Getenv("S3_SECRET_ACCESS_KEY"),
			"",
		)),
	)
	if err != nil {
		log.Fatalf("S3 config: %v", err)
	}
	s3Client := s3.NewFromConfig(s3Cfg)

	// R2 destination credentials.
	// R2 only accepts "auto" as region. The real routing is handled
	// by the account-scoped endpoint URL.
	r2Cfg, err := config.LoadDefaultConfig(context.Background(),
		config.WithRegion("auto"),
		config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(
			os.Getenv("R2_ACCESS_KEY_ID"),
			os.Getenv("R2_SECRET_ACCESS_KEY"),
			"",
		)),
	)
	if err != nil {
		log.Fatalf("R2 config: %v", err)
	}
	r2Client := s3.NewFromConfig(r2Cfg, func(o *s3.Options) {
		o.BaseEndpoint = aws.String(
			fmt.Sprintf("https://%s.r2.cloudflarestorage.com", os.Getenv("R2_ACCOUNT_ID")),
		)
	})

	srcBucket := os.Getenv("S3_BUCKET")
	dstBucket := os.Getenv("R2_BUCKET")
	ctx := context.Background()

	keys := make(chan string, workers*2)
	var wg sync.WaitGroup
	var copied, failed int64

	for range workers {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for key := range keys {
				if err := migrate(ctx, s3Client, r2Client, srcBucket, dstBucket, key); err != nil {
					log.Printf("FAIL %s: %v", key, err)
					atomic.AddInt64(&failed, 1)
				} else {
					n := atomic.AddInt64(&copied, 1)
					if n%100 == 0 {
						fmt.Printf("copied %d objects\n", n)
					}
				}
			}
		}()
	}

	// Paginate the source bucket and push keys to workers.
	// ListObjectsV2 returns up to 1000 objects per page.
	var token *string
	for {
		out, err := s3Client.ListObjectsV2(ctx, &s3.ListObjectsV2Input{
			Bucket:            aws.String(srcBucket),
			ContinuationToken: token,
		})
		if err != nil {
			log.Fatalf("list: %v", err)
		}
		for _, obj := range out.Contents {
			keys <- aws.ToString(obj.Key)
		}
		if !aws.ToBool(out.IsTruncated) {
			break
		}
		token = out.NextContinuationToken
	}
	close(keys)
	wg.Wait()
	fmt.Printf("\nfinished: %d copied, %d failed\n", copied, failed)
}

func migrate(
	ctx context.Context,
	src, dst *s3.Client,
	srcBucket, dstBucket, key string,
) error {
	// Pull the object from S3. The response body is a stream.
	// ContentLength is required by PutObject so R2 knows how many
	// bytes to expect. Skipping it causes a "MissingContentLength" error.
	got, err := src.GetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(srcBucket),
		Key:    aws.String(key),
	})
	if err != nil {
		return fmt.Errorf("get: %w", err)
	}
	defer got.Body.Close()

	_, err = dst.PutObject(ctx, &s3.PutObjectInput{
		Bucket:        aws.String(dstBucket),
		Key:           aws.String(key),
		Body:          got.Body,
		ContentType:   got.ContentType,
		ContentLength: got.ContentLength,
	})
	if err != nil {
		return fmt.Errorf("put: %w", err)
	}
	return nil
}
```

Your go.mod needs these dependencies:
```
require (
	github.com/aws/aws-sdk-go-v2 v1.32.0
	github.com/aws/aws-sdk-go-v2/config v1.28.0
	github.com/aws/aws-sdk-go-v2/credentials v1.17.0
	github.com/aws/aws-sdk-go-v2/service/s3 v1.66.0
)
```

Run it like this:
```shell
S3_REGION=us-east-1 \
S3_ACCESS_KEY_ID=AKIA... \
S3_SECRET_ACCESS_KEY=... \
S3_BUCKET=my-s3-bucket \
R2_ACCOUNT_ID=abc123 \
R2_ACCESS_KEY_ID=... \
R2_SECRET_ACCESS_KEY=... \
R2_BUCKET=my-r2-bucket \
go run main.go
```

A few things worth noting about this script. The BaseEndpoint field on s3.Options is the current way to point the Go SDK at a custom endpoint. The older aws.EndpointResolverWithOptions still works but is deprecated. Setting the region to "auto" for the R2 client is required. Passing a real region string like "us-east-1" causes the SDK to try to resolve a subdomain from it and fail with a DNS error.
The ContentLength field on PutObjectInput matters. R2 rejects requests where the content length is unknown because it cannot buffer an unbounded stream. Passing got.ContentLength directly from the GetObject response solves that without reading the whole body into memory first.
Failed keys are logged and counted but do not stop the migration. After it finishes, grep the output for FAIL and re-run those keys manually if there are any.
Run the script during a low-traffic window, then enable Sippy for a week to catch anything uploaded between the copy and the cutover. Once Sippy stops pulling from S3, cut the S3 config out of your app. The two methods combine well: the script handles the initial bulk copy with your own filters and logging, and Sippy takes care of the tail.
What the Bill Looks Like in Practice
Take a small app storing 180 GB and serving 400 GB to users each month.
On S3: storage is $4.14, egress is $27 (after the 100 GB free allowance). Total around $31 per month.
On R2: storage is $2.70, and Class B reads add roughly $1.20 depending on request volume. Egress is $0. Total around $4 per month.
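Those figures fall straight out of the published rates. A sketch of the arithmetic, where the 3.3 million monthly Class B requests is an assumed figure chosen to match the rough $1.20 above:

```go
package main

import "fmt"

// Monthly bill for the example app: 180 GB stored, 400 GB served.
// Rates as quoted earlier in the article.
func s3Bill(storedGB, servedGB float64) float64 {
	egress := servedGB - 100 // 100 GB free allowance
	if egress < 0 {
		egress = 0
	}
	return storedGB*0.023 + egress*0.09
}

func r2Bill(storedGB, classBMillions float64) float64 {
	// egress is $0; only storage and Class B reads are billed
	return storedGB*0.015 + classBMillions*0.36
}

func main() {
	fmt.Printf("S3: $%.2f/mo\n", s3Bill(180, 400)) // → S3: $31.14/mo
	fmt.Printf("R2: $%.2f/mo\n", r2Bill(180, 3.3)) // → R2: $3.89/mo
}
```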
That gap is proportional to your traffic. The egress charge is linear, so the more users you get the worse it looks on S3 and the more irrelevant it becomes on R2.
When to Stay on S3
R2 is the right call for user uploads, product images, generated files and anything your app serves directly to users. The migration is low risk because most Laravel apps use the same small set of storage operations: put, get, delete and url. R2 supports all of those without issue.
Stay on S3 when you need Object Lock for compliance, when your pipeline connects to Athena, EMR or Redshift, when you need Glacier for cold archival data you want to store at under a cent per GB or when your compliance certifications specifically require S3.
The ACL issue is the one real trap. Every other migration step is just changing .env variables and testing your upload flow. Know about the retain_visibility setting before you go live and you will not have a bad day.
AWS built the egress fee into the business model deliberately. Nothing in the pricing page is a mistake. You do not have to keep paying it.


