TLDR: Enabling `uniform_bucket_level_access` on a GCS bucket disables the legacy role bindings that project owners rely on for object access. Your Terraform SA with `roles/owner` will get 403s reading objects it could read moments earlier. Add explicit `roles/storage.objectViewer` bindings on the bucket before flipping the setting.

I’d recommend uniform bucket-level access for most buckets. Without it, GCS uses two overlapping permission systems: bucket-level IAM and per-object ACLs. You can’t see object ACLs in `get-iam-policy`, so auditing access means checking two places. With uniform access, you manage all permissions through bucket-level IAM. Google recommends it for new buckets, and you can enforce it project-wide via org policies.
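For new buckets, the setting is one line in Terraform. A minimal sketch, with an illustrative bucket name and location:

```hcl
resource "google_storage_bucket" "artifacts" {
  name     = "my-project-artifacts" # illustrative name
  location = "US"

  # All access is governed by bucket-level IAM; object ACLs are disabled.
  uniform_bucket_level_access = true
}
```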
I hit the problem when migrating an existing bucket.
I had a CI pipeline uploading versioned zip files to a GCS bucket. The uploads kept failing with `storage.objects.delete` permission errors when overwriting existing objects. The bucket used legacy (fine-grained) ACLs, under which `gcloud storage cp` requires delete permission to overwrite, and I didn’t want to grant it. I enabled `uniform_bucket_level_access` to fix that, letting `objectCreator` work as expected.
The Terraform apply that flipped the setting succeeded. Subsequent applies failed with:
```
Error: Error retrieving storage bucket object: googleapi: Error 403:
<sa>@<project>.iam.gserviceaccount.com does not have storage.objects.get
access to the Google Cloud Storage object.
Permission 'storage.objects.get' denied on resource (or it may not exist).
```
The service account had `roles/owner` at the project level. It had been reading this bucket for months. I changed one bucket setting and nothing else.
## How project owners access GCS objects
Without uniform access, GCS grants every bucket a set of legacy role bindings based on project membership:
- `projectOwner:<project>` → `roles/storage.legacyBucketOwner`
- `projectEditor:<project>` → `roles/storage.legacyBucketWriter`
- `projectViewer:<project>` → `roles/storage.legacyBucketReader`
`roles/storage.legacyBucketOwner` includes `storage.objects.list`, `storage.objects.create`, and `storage.objects.delete`; `storage.objects.get` comes from the bucket’s default object ACLs, which grant project owners owner access on each new object. Either way, an SA with `roles/owner` at the project level inherits object access on all buckets through these legacy paths.
You can check your bucket’s bindings with:

```shell
gcloud storage buckets get-iam-policy gs://your-bucket
```

Look for `projectOwner:<project>` → `roles/storage.legacyBucketOwner`.
## Uniform access disables legacy bindings
When you enable `uniform_bucket_level_access`, GCS stops evaluating legacy/ACL-based access. The documentation says:

> Cloud IAM policies that use the ACL-based predefined roles (such as `roles/storage.legacyBucketReader`) are no longer effective for buckets with uniform bucket-level access enabled.
The `projectOwner` → `legacyBucketOwner` bindings still show up in `get-iam-policy`, but GCS ignores them. Only explicit IAM bindings take effect. The change is immediate: you won’t see a warning from GCS or a plan diff in Terraform telling you that access paths just broke.
## The fix
Add explicit IAM bindings for each SA that reads from the bucket:
```hcl
resource "google_storage_bucket_iam_member" "terraform_reader" {
  bucket = google_storage_bucket.cloud_functions.name
  role   = "roles/storage.objectViewer"
  member = "serviceAccount:terraform@<project>.iam.gserviceaccount.com"
}
```
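If several service accounts read from the bucket, a `for_each` over a list keeps the grants in one place. A sketch, with illustrative account names:

```hcl
locals {
  # Illustrative principals; replace with the SAs that actually read the bucket.
  bucket_readers = [
    "serviceAccount:terraform@<project>.iam.gserviceaccount.com",
    "serviceAccount:ci@<project>.iam.gserviceaccount.com",
  ]
}

resource "google_storage_bucket_iam_member" "readers" {
  for_each = toset(local.bucket_readers)

  bucket = google_storage_bucket.cloud_functions.name
  role   = "roles/storage.objectViewer"
  member = each.value
}
```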
If Terraform itself reads objects in the bucket (e.g. via `data "google_storage_bucket_object"`), you’ll hit a chicken-and-egg problem: Terraform can’t apply because it can’t read the bucket, and it can’t grant itself access without applying. Grant the binding via `gcloud` first:
```shell
gcloud storage buckets add-iam-policy-binding gs://your-bucket \
  --member="serviceAccount:terraform@<project>.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
```
Then run `terraform apply` to codify it.
## Migrating safely
Before you enable `uniform_bucket_level_access` on an existing bucket:

- **Audit.** Run `gcloud storage buckets get-iam-policy` and identify which principals access the bucket only through legacy bindings.
- **Add explicit grants first.** Grant `roles/storage.objectViewer` (or whatever each SA needs) on the bucket before flipping the setting. This avoids the chicken-and-egg problem.
- **Don’t rely on project-level roles.** `roles/owner` at the project level only works through legacy bindings on GCS. Uniform access disables those bindings, so you need bucket-level IAM grants regardless of project role.
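The audit step can be scripted. As a sketch, this filters the policy JSON for legacy-role bindings; the sample document stands in for real `gcloud storage buckets get-iam-policy --format=json` output, and `jq` is assumed to be installed:

```shell
# Sample policy JSON standing in for:
#   gcloud storage buckets get-iam-policy gs://your-bucket --format=json
cat > policy.json <<'EOF'
{
  "bindings": [
    {"role": "roles/storage.legacyBucketOwner",
     "members": ["projectOwner:my-project"]},
    {"role": "roles/storage.legacyBucketReader",
     "members": ["projectViewer:my-project"]},
    {"role": "roles/storage.objectViewer",
     "members": ["serviceAccount:terraform@my-project.iam.gserviceaccount.com"]}
  ]
}
EOF

# Principals that appear only in legacy bindings lose access under uniform access.
jq -r '.bindings[] | select(.role | test("legacy"))
       | "\(.role): \(.members | join(", "))"' policy.json
```

For the sample above, this prints the two legacy bindings and skips the explicit `objectViewer` grant, which survives the migration.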
## Further reading
- Uniform bucket-level access — the official GCS documentation
- IAM roles for Cloud Storage — lists what permissions each role includes