Export from AWS to S3
CloudQuery is an open-source data integration platform that allows you to export data from any source to any destination.
The CloudQuery AWS plugin allows you to sync data from AWS to any destination, including S3. It takes only minutes to get started.
AWS
The AWS Source plugin extracts information from many of the services offered by Amazon Web Services (AWS) and loads it into any supported CloudQuery destination. Some tables are marked as premium and are priced per 1M rows synced.
S3
This destination plugin lets you sync data from a CloudQuery source to remote S3 storage in various formats, such as CSV, JSON, and Parquet.
macOS Setup
Step 1. Install CloudQuery
brew install cloudquery/tap/cloudquery
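To confirm the CLI is installed and on your PATH, you can print its version (assuming the conventional --version flag; exact output format may vary by release):

cloudquery --version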
Step 2. Log in to CloudQuery CLI
Logging in is required to use premium plugins and premium tables in open-core plugins.
cloudquery login
Step 3. Configure AWS source plugin
You can find more information about the configuration in the plugin documentation. Save the following configuration as aws.yml:
kind: source
spec:
  # Source spec section
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v26.0.0"
  tables: ["aws_ec2_instances"]
  destinations: ["s3"]
  spec:
    # Optional parameters
    # regions: []
    # accounts: []
    # org: nil
    # concurrency: 50000
    # initialization_concurrency: 4
    # aws_debug: false
    # max_retries: 10
    # max_backoff: 30
    # custom_endpoint_url: ""
    # custom_endpoint_hostname_immutable: nil # required when custom_endpoint_url is set
    # custom_endpoint_partition_id: "" # required when custom_endpoint_url is set
    # custom_endpoint_signing_region: "" # required when custom_endpoint_url is set
    # use_paid_apis: false
    # table_options: nil
    # scheduler: shuffle # options are: dfs, round-robin or shuffle
    # use_nested_table_rate_limiting: false
    # enable_api_level_tracing: false
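The tables list accepts multiple entries, and the commented options above can be uncommented to narrow the sync. As a sketch, the following variant (the table names and regions here are illustrative) syncs EC2 instances and S3 buckets from two specific regions only:

kind: source
spec:
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "v26.0.0"
  tables: ["aws_ec2_instances", "aws_s3_buckets"]
  destinations: ["s3"]
  spec:
    regions: ["us-east-1", "eu-west-1"] # sync only these regions instead of all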
Step 4. Configure S3 destination plugin
You can find more information about the configuration in the plugin documentation. Save the following configuration as s3.yml:
kind: destination
spec:
  name: "s3"
  path: "cloudquery/s3"
  registry: "cloudquery"
  version: "v6.0.0"
  write_mode: "append"
  spec:
    bucket: "bucket_name"
    region: "region-name" # Example: us-east-1
    path: "path/to/files/{{TABLE}}/{{UUID}}.{{FORMAT}}"
    format: "parquet" # options: parquet, json, csv
    format_spec:
      # CSV-specific parameters:
      # delimiter: ","
      # skip_header: false
    # Optional parameters
    # compression: "" # options: gzip
    # no_rotate: false
    # athena: false # <- set this to true for Athena compatibility
    # test_write: true # tests the ability to write to the bucket before processing the data
    # endpoint: "" # Endpoint to use for S3 API calls.
    # endpoint_skip_tls_verify: false # Disable TLS verification if using an untrusted certificate
    # use_path_style: false
    # batch_size: 10000 # 10K entries
    # batch_size_bytes: 52428800 # 50 MiB
    # batch_timeout: 30s # 30 seconds
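For example, to write gzip-compressed CSV files instead of Parquet, you could switch the format and enable the CSV and compression parameters listed above (the bucket name here is a placeholder):

kind: destination
spec:
  name: "s3"
  path: "cloudquery/s3"
  registry: "cloudquery"
  version: "v6.0.0"
  write_mode: "append"
  spec:
    bucket: "my-export-bucket" # placeholder; use your own bucket name
    region: "us-east-1"
    path: "path/to/files/{{TABLE}}/{{UUID}}.{{FORMAT}}"
    format: "csv"
    format_spec:
      delimiter: ","
      skip_header: false
    compression: "gzip" # the only compression option listed above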
Step 5. Run the sync
cloudquery sync aws.yml s3.yml
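Once the sync completes, the path template in the destination spec determines the object keys: {{TABLE}} expands to the table name, {{UUID}} to a unique file identifier, and {{FORMAT}} to the file extension, so with the configuration above, rows from aws_ec2_instances land at keys like:

path/to/files/aws_ec2_instances/<uuid>.parquet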