New: Replicate data from Snowflake with the CloudQuery source plugin

Snowflake is the data warehouse of choice for thousands of teams, but moving that data out can be brittle, slow, and a drain on engineering hours.
Whether you're replicating to a lower-cost database for analytics, into a data lake for cheap storage, or into an open lakehouse table format like Iceberg, getting data out of Snowflake can be painful.
With the new CloudQuery Snowflake Source Plugin, you can now replicate or migrate Snowflake tables into any CloudQuery-supported destination with a single init and sync.

Replicating data from Snowflake #

Data engineers know the pain of data movement:
  • Migrations: Consolidating Snowflake accounts or shifting workloads to another platform can be slow and error-prone.
  • Replication: Teams need Snowflake data side-by-side with other systems (Postgres, BigQuery, etc.) or in open formats like Iceberg for long-term flexibility.
  • Change management: Working around API changes or babysitting slow queries can cost hours of rework.
As a high-performance ELT framework, CloudQuery lets you quickly and repeatably move data from (or to!) Snowflake without being bottlenecked by poor query performance or API rate limits.

Sample config #

Here’s what a simple Snowflake → SQLite replication config looks like:
kind: source
spec:
  name: "snowflake"
  registry: "cloudquery"
  path: "cloudquery/snowflake"
  version: "v1.1.0"
  tables: ["*"]
  destinations: ["sqlite"]
  spec:
    connection_string: "${SNOWFLAKE_CONNECTION_STRING}" # set the environment variable in a format like "username:password@organization-account/database?schema=public"
---
kind: destination
spec:
  name: sqlite
  path: cloudquery/sqlite
  registry: cloudquery
  version: "v2.13.1"
  # Learn more about the configuration options at https://cql.ink/sqlite_destination
  spec:
    connection_string: ./db.sql
That’s it. With one config, CloudQuery connects to Snowflake, auto-generates tables from your schema, and syncs them into the destination.
  • Tables → Select specific tables or just "*" to pull everything.
  • Destination → Load into any destination: other managed OLAP databases, vector databases, or even files.
  • Performance → Tune concurrency within the spec to improve throughput based on your requirements (see the sketch after this list).
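For example, here's a sketch of a tuned source spec. The table names and concurrency value are illustrative, not recommendations; check the plugin hub for the default concurrency and the full list of options:
kind: source
spec:
  name: "snowflake"
  registry: "cloudquery"
  path: "cloudquery/snowflake"
  version: "v1.1.0"
  # Pull only the tables you need instead of "*" (names here are examples)
  tables: ["orders", "customers"]
  destinations: ["sqlite"]
  # Illustrative value: raise or lower concurrency to balance throughput
  # against warehouse load
  concurrency: 10000
  spec:
    connection_string: "${SNOWFLAKE_CONNECTION_STRING}"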

Example use cases #

  • Cost reduction: Replicating Snowflake into a lower-cost database can cut compute spend on analytics-heavy workloads.
  • Lakehouse replication: Export Snowflake tables into S3 or GCS in Iceberg/Parquet format for scalable, low-cost storage and analytics (see the sketch after this list).
  • Multi-cloud replication: Move Snowflake data into AWS-native tools like Athena, or GCP-native tools like BigQuery.
  • Migrations: Consolidate multiple Snowflake accounts or migrate Snowflake → any CloudQuery destination during platform shifts.
  • Backups & compliance: Keep a replica of your Snowflake data in another warehouse or lake without breaking the bank.
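As a concrete illustration of the lakehouse use case, here's a sketch of an S3 destination spec that writes Parquet files. The bucket name, region, and version are placeholders; the exact options are described in the S3 plugin docs:
kind: destination
spec:
  name: "s3"
  path: "cloudquery/s3"
  registry: "cloudquery"
  version: "vX.Y.Z" # placeholder: use the latest version from the plugin hub
  spec:
    bucket: "my-lakehouse-bucket" # placeholder bucket name
    region: "us-east-1"
    # {{TABLE}} and {{UUID}} are expanded per table and per file at sync time
    path: "snowflake/{{TABLE}}/{{UUID}}.parquet"
    format: "parquet"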

Get started #

  1. Initialize the Snowflake integration: cloudquery init --source snowflake --destination bigquery
  2. Run cloudquery sync to start moving data.
Note: on a first sync, expect a short wait up front while the Snowflake warehouse spins up.
Full docs and configuration options are available in the CloudQuery Snowflake plugin hub.
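Putting the two steps together, a minimal end-to-end run looks like the sketch below. The generated spec filename is an assumption; use whatever path init actually writes:
# Connection string format from the sample config above (placeholder credentials)
export SNOWFLAKE_CONNECTION_STRING="username:password@organization-account/database?schema=public"

# Scaffold a spec file for the Snowflake -> BigQuery pair
cloudquery init --source snowflake --destination bigquery

# Run the sync against the generated spec (assumed filename)
cloudquery sync snowflake_to_bigquery.yaml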

Build durable replication #

CloudQuery makes it simple for data engineers to replicate, migrate, and manage Snowflake data across warehouses, lakes, and lakehouses, giving you full control and choice over where your data lives without sacrificing performance.
👉 Try the Snowflake Source Plugin today and see how quickly you can move your data with CloudQuery.
