Creating Cross-Project Service Accounts in GCP (Console + gcloud CLI)
Most GCP setups don't stay in one project for long. You end up with separate projects for dev, staging, and prod - or separate projects per team, per region, or per acquisition. That's fine; it's how GCP is designed. The problem shows up when you need one credential to read across all of them: for auditing, for a monitoring tool, for a security scanner.
The standard answer is a cross-project service account: create one identity in a dedicated project, grant it read access to every other project, and use that single credential wherever you need organization-wide visibility. This post walks through how to do that correctly using the gcloud CLI as the primary method (no screenshots that go stale when GCP redesigns its console), plus a console walkthrough for those who prefer it. We also cover the specific roles CloudQuery needs, a common API-enablement gotcha, and how to verify access without downloading a key file.
Prerequisites #
Before running any commands, make sure you have:
- gcloud CLI installed and authenticated. Run `gcloud auth login` if you haven't already (install guide).
- A GCP project to host the service account (the dedicated platform project described below).
- The right IAM permissions. To grant roles at the org level, you need Organization Admin (`roles/resourcemanager.organizationAdmin`) or Security Admin (`roles/iam.securityAdmin`). For project-level grants, Project IAM Admin (`roles/resourcemanager.projectIamAdmin`) on each target project is enough.
If you're getting `PERMISSION_DENIED` on the binding commands, this is almost always the cause.
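If you're not sure which identity the CLI is using, a quick check before you grant anything:
# Show the active gcloud account
gcloud auth list
gcloud config get-value account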
Which Project Should You Create the Service Account In? #
This is worth thinking about before you run any commands. A service account lives in whichever project you create it in - if that project gets deleted, the service account disappears with it, and every tool that used its credentials breaks.
The pattern we'd recommend: create a dedicated project for shared infrastructure identities. Call it something like `my-org-platform` or `my-org-security`. Your workload projects come and go; this one stays. If you already have a shared services or security project in your org, use that.
Creating the Service Account (gcloud CLI) #
gcloud iam service-accounts create cloudquery-reader \
--display-name="CloudQuery Read-Only" \
--description="Cross-project read access for CloudQuery syncs" \
--project=my-org-platform
Note the full service account email - you'll use it in every `add-iam-policy-binding` call that follows:
gcloud iam service-accounts list --project=my-org-platform
# [email protected]
Org-Level vs. Project-Level Access #
Here's where most tutorials stop at "grant Viewer to each project" without explaining the trade-off.
If you have an organization and want access to all current and future projects: grant at the organization level. One command, done. New projects automatically inherit the binding (GCP IAM inheritance docs).
# Get your org ID first
gcloud organizations list
# Grant Viewer at the org level
gcloud organizations add-iam-policy-binding ORG_ID \
--member="serviceAccount:[email protected]" \
--role="roles/viewer"
# CloudQuery also needs this to list enabled services per project
gcloud organizations add-iam-policy-binding ORG_ID \
--member="serviceAccount:[email protected]" \
--role="roles/serviceusage.serviceUsageViewer"
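To confirm both bindings landed, filter the org's IAM policy for the new member:
# List the roles now bound to the service account at the org level
gcloud organizations get-iam-policy ORG_ID \
  --flatten="bindings[].members" \
  --filter="bindings.members:cloudquery-reader" \
  --format="table(bindings.role)"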
If you only need access to specific projects (or you don't have an organization), grant at the project level and repeat for each one:
for PROJECT_ID in project-a project-b project-c; do
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:[email protected]" \
--role="roles/viewer"
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member="serviceAccount:[email protected]" \
--role="roles/serviceusage.serviceUsageViewer"
done
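The same check works per project:
# Confirm the roles held in a single target project
gcloud projects get-iam-policy project-a \
  --flatten="bindings[].members" \
  --filter="bindings.members:cloudquery-reader" \
  --format="table(bindings.role)"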
What Roles Does CloudQuery Need? #
Generic tutorials say "grant Viewer." That's mostly right, but CloudQuery needs two roles to function fully:
- `roles/viewer` - read access to the resources themselves
- `roles/serviceusage.serviceUsageViewer` - permission to list which APIs are enabled in each project
If you skip `serviceusage.serviceUsageViewer`, CloudQuery can still sync, but enabled-service checks will fail and some resource types will return no rows without a clear error. Both roles are read-only. The full list of tables the GCP plugin can sync is on the CloudQuery Hub - check there when deciding which services to enable.
The API Enablement Gotcha #
IAM controls who can call an API, not whether the API is on - each API also has to be enabled in every target project before its resources can be listed.
A concrete example: if the Compute Engine API is disabled in `project-b`, CloudQuery won't sync `gcp_compute_instances` from that project. No error - only missing rows. We've seen teams spend hours debugging IAM bindings when the actual problem was a disabled API.
To check what's enabled:
gcloud services list --enabled --project=project-b
Enable the APIs that correspond to the resource types you want to sync. The list below covers everything in the coverage section further down:
for PROJECT_ID in project-a project-b project-c; do
gcloud services enable \
cloudresourcemanager.googleapis.com \
compute.googleapis.com \
storage.googleapis.com \
iam.googleapis.com \
container.googleapis.com \
sqladmin.googleapis.com \
secretmanager.googleapis.com \
cloudkms.googleapis.com \
cloudbuild.googleapis.com \
--project=$PROJECT_ID
done
If a resource type returns no data after a sync, check API enablement before debugging permissions.
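A quick spot check for one API across every target project - the `config.name` filter field is our assumption about the current `gcloud services list` output schema, so adjust if your CLI version differs:
for PROJECT_ID in project-a project-b project-c; do
  echo "== $PROJECT_ID =="
  gcloud services list --enabled --project=$PROJECT_ID \
    --filter="config.name:compute.googleapis.com"
done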
Verifying Access via Service Account Impersonation #
Before creating and distributing a JSON key, verify the service account can see what you expect. The `--impersonate-service-account` flag lets you adopt the service account's identity temporarily using your own credentials - no key file involved:
# Verify compute access in project-b using the service account's identity
gcloud compute instances list \
--project=project-b \
--impersonate-service-account=[email protected]
If you see a list of instances (or an empty list when there are none), the IAM binding worked. A permission error means the binding hasn't propagated yet (give it a few minutes) or the role wasn't applied to the right scope.
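One prerequisite the error messages don't always make obvious: impersonation requires your own account to hold Service Account Token Creator on the service account. The `user:` email below is a placeholder for your own identity:
# Allow your user to impersonate the service account
gcloud iam service-accounts add-iam-policy-binding \
  [email protected] \
  --member="user:[email protected]" \
  --role="roles/iam.serviceAccountTokenCreator"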
This pattern is also a good argument for Workload Identity Federation if you're running CloudQuery on GCP infrastructure - you can authenticate without a key file at all. More on that below.
Using the Console Instead #
If you prefer the GCP console, the flow maps to the same operations. The UI terminology has changed since older tutorials - the current button names are:
Creating the service account:
- Go to IAM & Admin > Service Accounts in your platform project
- Click Create Service Account
- Fill in the name (`cloudquery-reader`) and description, then click Create and Continue
- Under Basic, assign the Viewer role, click Continue, then Done
Granting access to target projects (repeat per project, or do this at the org level):
- Switch to the target project using the project selector in the top nav
- Go to IAM & Admin > IAM
- Click Grant Access
- Paste the service account email in the New Principals field
- Assign the Viewer role and the Service Usage Viewer role
- Click Save
For org-level access, select your organization (not a specific project) from the project picker at the top before navigating to IAM.
Downloading the JSON key (console):
- Go back to IAM & Admin > Service Accounts in your platform project
- Click the `cloudquery-reader` service account
- Open the Keys tab
- Click Add Key > Create New Key
- Select JSON and click Create - the key file downloads to your computer automatically
Store it securely. The next section covers how to point CloudQuery at it.
Downloading the Key and Configuring CloudQuery #
Once you've verified access, create the JSON key:
gcloud iam service-accounts keys create cloudquery-key.json \
--iam-account=[email protected]
Keys don't expire automatically - rotate them on a schedule. Create the replacement first, point CloudQuery at it, then delete the old key, so a sync never runs without valid credentials:
# Create a replacement
gcloud iam service-accounts keys create cloudquery-key-new.json \
  --iam-account=[email protected]
# List existing keys to get the old KEY_ID
gcloud iam service-accounts keys list \
  --iam-account=[email protected]
# Delete the old key once nothing uses it
gcloud iam service-accounts keys delete KEY_ID \
  --iam-account=[email protected]
Set the environment variable CloudQuery uses for GCP credentials:
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/cloudquery-key.json
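The key is a bearer credential, so restrict the file before anything reads it:
# Readable by your user only
chmod 600 /path/to/cloudquery-key.json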
Then create a CloudQuery configuration file. The `project_ids` field is optional - omit it and CloudQuery will auto-discover every project the service account can access:
# gcp-config.yaml
kind: source
spec:
  name: gcp
  path: cloudquery/gcp
  registry: cloudquery
  version: "v22.0.0" # check https://www.cloudquery.io/hub/plugins/source/cloudquery/gcp for latest
  tables:
    - "*"
  destinations:
    - postgresql
  spec:
    project_ids:
      - project-a
      - project-b
      - project-c
    # skip tables for APIs that aren't enabled in a project
    # prevents silent empty results for disabled services
    enabled_services_only: true
---
kind: destination
spec:
  name: postgresql
  path: cloudquery/postgresql
  registry: cloudquery
  version: "v8.0.0" # check https://www.cloudquery.io/hub/plugins/destination/cloudquery/postgresql for latest
  spec:
    connection_string: "${POSTGRES_URL}"
Run the sync:
cloudquery sync gcp-config.yaml
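If the sync finishes but tables look thin, re-run with verbose logging - permission and API errors become much easier to spot:
cloudquery sync gcp-config.yaml --log-level debug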
What Does the GCP Plugin Cover? #
The GCP plugin syncs 300+ resource types across all major GCP services. The tables most relevant to a security and governance audit map to the APIs enabled earlier: compute instances, storage buckets, IAM service accounts and roles, GKE clusters, Cloud SQL instances, KMS keys, secrets, and Cloud Build resources.
The full table list is on the Hub. If you're trying to sync everything from the start, using `tables: ["*"]` in the config is fine - you can narrow it down once you know which resource types your team queries most.
Verifying Multi-Project Access in SQL #
After the sync completes, this is the fastest way to confirm data arrived from all your projects:
-- Which projects did CloudQuery sync data from?
SELECT project_id, COUNT(*) AS instance_count
FROM gcp_compute_instances
GROUP BY project_id
ORDER BY instance_count DESC;
If a project is missing, go back to the API enablement check first - that's the most common cause.
Two more queries worth running right away. The first finds compute instances with no labels - a common governance gap that's invisible until you can query across every project at once:
-- Unlabeled compute instances across all projects
SELECT project_id, name, zone, machine_type
FROM gcp_compute_instances
WHERE labels IS NULL OR labels = '{}'
ORDER BY project_id;
The second finds active service accounts across your entire fleet - useful for auditing what identities exist before you start tightening permissions:
-- Active service accounts across all projects
SELECT project_id, email, display_name
FROM gcp_iam_service_accounts
WHERE disabled = false
ORDER BY project_id, email;
The whole point of a cross-project service account is that these queries return rows from all your projects in one result set - no switching consoles, no separate API calls per project. For deeper GCP analysis, Building a Cloud Asset Inventory for GCP covers the full data pipeline including dbt transformations on top of this data.
From Ad-Hoc Queries to Continuous Monitoring #
These queries work well for one-off audits. The problem is that cloud infrastructure changes constantly - an unlabeled instance you didn't see last week might appear this week, and you won't know unless you remember to run the query again.
CloudQuery Policies let you register these SQL queries as automated detective controls. Write the same SQL, and CloudQuery evaluates it on every sync. When a new unlabeled instance appears in any of your projects, the Policy flags it - no manual checks required.
When a Policy fires, Automations can route the alert to wherever your team works: a Slack message to your #cloud-governance channel, a PagerDuty incident, a Jira ticket. The path from "service account has read access" to "team knows about a violation" gets a lot shorter.
Key Files vs. Workload Identity Federation #
The JSON key approach is the most portable option - it works anywhere: your laptop, a VM outside GCP, GitHub Actions. For many teams that's the right call, especially during initial setup.
If you're running CloudQuery on GCP infrastructure (a Compute Engine VM, Cloud Run, GKE), skip the key file and attach the service account directly to the resource:
# Attach the service account when creating a VM
gcloud compute instances create cloudquery-runner \
--service-account=[email protected] \
--scopes=cloud-platform \
--zone=us-central1-a
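The same pattern works on Cloud Run; the image path below is a placeholder for wherever your CloudQuery container image lives:
# Attach the service account to a Cloud Run service
gcloud run deploy cloudquery-runner \
  --image=us-docker.pkg.dev/my-org-platform/containers/cloudquery:latest \
  --service-account=[email protected] \
  --region=us-central1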
For local development on your own machine, the simplest option is Application Default Credentials (ADC):
gcloud auth application-default login
This authenticates your local gcloud session, and CloudQuery picks it up automatically through the ADC chain - no `GOOGLE_APPLICATION_CREDENTIALS` variable needed. It uses your personal account rather than the service account, so it's for local development only, not production or CI.
For external infrastructure (on-prem, other clouds, CI/CD systems), Workload Identity Federation is the keyless alternative. Your external identity - a GitHub Actions OIDC token, an AWS role - exchanges for a short-lived GCP token on each request. No file to leak, no rotation schedule to maintain.
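A minimal sketch of the GitHub Actions case - the pool and provider names are illustrative, and a production setup should add attribute conditions to restrict which repositories can authenticate:
# Create a workload identity pool and a GitHub OIDC provider in it
gcloud iam workload-identity-pools create cloudquery-pool \
  --location=global \
  --display-name="CloudQuery WIF"
gcloud iam workload-identity-pools providers create-oidc github \
  --location=global \
  --workload-identity-pool=cloudquery-pool \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub"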
The trade-off is real: WIF takes meaningfully more setup than downloading a key. If you're moving fast, the key file is fine for now. If you're handling production data across dozens of projects and want a clear audit trail of every access, WIF is worth the extra hour.
Regardless of which auth method you use, every API call CloudQuery makes is logged in GCP Cloud Audit Logs under the service account's email. Security teams can filter Cloud Logging for `principalEmail = "[email protected]"` to see exactly what was read and when.
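In the Logs Explorer, the full filter path is:
protoPayload.authenticationInfo.principalEmail="[email protected]"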
Cross-Organization Access #
If your projects span different GCP organizations - common after acquisitions or for agencies managing client infrastructure - the mechanics are the same, but you're granting IAM from a different organization's console or API.
The service account from `my-org-platform` still works; you grant it access as a member in the external organization's projects:
gcloud projects add-iam-policy-binding external-org-project \
--member="serviceAccount:[email protected]" \
--role="roles/viewer"
Two things to know: first, someone in the external organization has to run this command - you can't grant yourself access to a project you don't administer. Second, the domain restricted sharing org policy (`constraints/iam.allowedPolicyMemberDomains`) in the external org can block members from outside its own organization. If it does, the external org admin needs to update that policy before the binding will take effect.
FAQ #
Why does my service account exist in project A but I'm granting it access in project B?
A service account is a project-level resource - it's created in one project and can be granted access to any other project. The email format (`[email protected]`) always reflects the home project, but the IAM binding in project B is what controls access there.
Do I need to create a new service account in each project?
No. One service account, many IAM bindings. That's the point.
Can I grant access at the folder level instead of the org level?
Yes. `gcloud resource-manager folders add-iam-policy-binding FOLDER_ID --member=... --role=...` works the same way as the org-level command and covers all projects within that folder.
CloudQuery is returning no data from a project even though the IAM binding looks right. What should I check?
In order: (1) API enablement - run `gcloud services list --enabled --project=PROJECT_ID` and look for the relevant APIs. (2) IAM propagation - bindings can take a few minutes to propagate globally. (3) The service account email - double-check for typos. (4) CloudQuery's sync logs for explicit permission errors.
Should I use one service account for all tools or separate accounts per tool?
We'd use separate accounts per tool. When an account is compromised or you need to revoke access for one tool, you don't want it to take down your other monitoring. The extra IAM bindings are cheap.
Can I automate this with Terraform?
Yes. The Google provider's `google_service_account` resource creates the service account, and `google_project_iam_member` (or `google_organization_iam_member` for org-level grants) handles the role assignments without clobbering other members' bindings the way the authoritative `*_iam_binding` resources can. If you're already managing GCP IAM with Terraform, that's the cleaner long-term approach than running gcloud commands by hand.
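A minimal sketch of the org-level variant - resource names and the ORG_ID placeholder are illustrative:
# Create the service account in the platform project
resource "google_service_account" "cloudquery_reader" {
  project      = "my-org-platform"
  account_id   = "cloudquery-reader"
  display_name = "CloudQuery Read-Only"
}

# Grant both read roles at the organization level
resource "google_organization_iam_member" "cloudquery_roles" {
  for_each = toset(["roles/viewer", "roles/serviceusage.serviceUsageViewer"])
  org_id   = "ORG_ID"
  role     = each.value
  member   = "serviceAccount:${google_service_account.cloudquery_reader.email}"
}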