CloudQuery News
Find the Cloud Waste Contest
We've loaded AWS infrastructure data into a public S3 bucket. Sync it with CloudQuery into your own destination, run SQL queries, and hunt down wasteful resources.
Bonus: Got your own AWS account? Sync your own data, share your queries, and we'll send you swag too. Show your boss how much money you're saving and win double.
The Challenges #
Level 1: Unattached Volumes → CloudQuery T-shirt #
Find all EBS volumes sitting in 'available' state that aren't attached to any EC2 instance. These are burning storage costs for nobody.
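If you want to see the shape of the query before your sync finishes, here's a sketch against a toy SQLite table. The table and column names mirror the CloudQuery schema, but treat them as assumptions and verify with `.schema aws_ec2_ebs_volumes` in your own destination:

```python
import sqlite3

# Tiny stand-in for the synced aws_ec2_ebs_volumes table; real column
# names may differ slightly -- check your destination's schema first.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE aws_ec2_ebs_volumes (volume_id TEXT, state TEXT, size INTEGER)")
con.executemany(
    "INSERT INTO aws_ec2_ebs_volumes VALUES (?, ?, ?)",
    [("vol-1", "available", 100), ("vol-2", "in-use", 50), ("vol-3", "available", 20)],
)

# Unattached volumes report state = 'available'
rows = con.execute(
    "SELECT volume_id, size FROM aws_ec2_ebs_volumes WHERE state = 'available'"
).fetchall()
print(rows)  # [('vol-1', 100), ('vol-3', 20)]
```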
Level 2: Dead Weight Storage → CloudQuery Mug #
Find volumes attached to EC2 instances that aren't running. Stopped or terminated instances don't need their volumes mounted, but they're still racking up charges.
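This one is a join: volumes on one side, instance state on the other. A minimal sketch below uses a simplified attachments table; in the real CloudQuery schema, attachments live in a JSON column on `aws_ec2_ebs_volumes` that you may need to unpack (e.g. with SQLite's `json_each`), so check your schema before copying this:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Simplified stand-ins for the synced tables; the real attachment data
# is nested JSON, flattened here into its own table for clarity.
con.execute("CREATE TABLE aws_ec2_instances (instance_id TEXT, state TEXT)")
con.execute("CREATE TABLE volume_attachments (volume_id TEXT, instance_id TEXT)")
con.executemany("INSERT INTO aws_ec2_instances VALUES (?, ?)",
                [("i-1", "running"), ("i-2", "stopped")])
con.executemany("INSERT INTO volume_attachments VALUES (?, ?)",
                [("vol-a", "i-1"), ("vol-b", "i-2")])

# Volumes attached to instances that are not running
rows = con.execute("""
    SELECT a.volume_id, i.instance_id, i.state
    FROM volume_attachments a
    JOIN aws_ec2_instances i ON i.instance_id = a.instance_id
    WHERE i.state != 'running'
""").fetchall()
print(rows)  # [('vol-b', 'i-2', 'stopped')]
```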
Level 3: Public IPs → CloudQuery Sweatshirt #
Find ALL cloud resources with public IPs. This includes EC2 instances with public IPs, Elastic IPs (especially unassociated ones costing you money), internet-facing load balancers, and publicly accessible S3 buckets.
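Because this challenge spans several tables, the natural pattern is one SELECT per resource type stitched together with UNION ALL into a single exposure report. A sketch with two of the tables (column names such as `public_ip_address` and `association_id` follow the underlying AWS API shapes, but verify them against your synced schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Simplified columns for illustration; the real tables carry far more.
con.execute("CREATE TABLE aws_ec2_instances (instance_id TEXT, public_ip_address TEXT)")
con.execute("CREATE TABLE aws_ec2_eips (allocation_id TEXT, association_id TEXT)")
con.executemany("INSERT INTO aws_ec2_instances VALUES (?, ?)",
                [("i-1", "54.0.0.1"), ("i-2", None)])
con.executemany("INSERT INTO aws_ec2_eips VALUES (?, ?)",
                [("eipalloc-1", None), ("eipalloc-2", "eipassoc-9")])

# UNION ALL one SELECT per resource type into a single public-exposure list;
# extend the same pattern to load balancers and S3 buckets.
rows = sorted(con.execute("""
    SELECT 'ec2_instance' AS kind, instance_id AS id
    FROM aws_ec2_instances WHERE public_ip_address IS NOT NULL
    UNION ALL
    SELECT 'unassociated_eip', allocation_id
    FROM aws_ec2_eips WHERE association_id IS NULL
""").fetchall())
print(rows)  # [('ec2_instance', 'i-1'), ('unassociated_eip', 'eipalloc-1')]
```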
How It Works #
- We've loaded AWS data into our public S3 bucket
- You configure CloudQuery to use our S3 bucket as your SOURCE
- Run a CloudQuery sync to pull the data from S3 to your destination
- Query your local data to solve the challenges
- Post your results and claim your swag
S3 Bucket Details:
- Bucket: cloudquery-contest-readonly-2025
- Region: us-east-1
- Format: Parquet files
- Access: public read-only (no AWS credentials needed)
Tables available:
- aws_ec2_instances
- aws_ec2_ebs_volumes
- aws_ec2_eips
- aws_elbv1_load_balancers
- aws_elbv2_load_balancers
- aws_s3_buckets
- aws_s3_bucket_policies
- aws_ec2_network_interfaces
Getting Started: Syncing from S3 #
Step 1: Install CloudQuery CLI
Download from CloudQuery.io/download
Step 2: Create your config file
Initialize a new CloudQuery project:
cloudquery init
This creates a starter config file. You need TWO things: an S3 SOURCE (our bucket) and a DESTINATION (your choice).
Edit your config file to sync from our S3 bucket to SQLite locally:
kind: source
spec:
  name: s3
  path: cloudquery/s3
  registry: cloudquery
  version: "v1.8.17"
  tables: ["*"]
  destinations:
    - sqlite
  spec:
    bucket: "cloudquery-contest-readonly-2025"
    region: "us-east-1"
    path: "cloudquery-contest/{{TABLE}}/*.parquet"
    format: "parquet"
---
kind: destination
spec:
  name: sqlite
  path: cloudquery/sqlite
  registry: cloudquery
  version: "v2.13.1"
  write_mode: "overwrite-delete-stale"
  spec:
    connection_string: "./cloudquery-contest.db"
Want to use a different destination? Check out all available options:
- S3 Source Plugin: hub.CloudQuery.io/plugins/source/CloudQuery/s3
- All Destination Plugins: hub.CloudQuery.io/plugins/destination
Popular choices include DuckDB, BigQuery, Snowflake, CSV files, and 50+ others. The setup is the same - just swap the destination spec.
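For example, swapping in DuckDB might look like the sketch below. The spec keys follow the same pattern as the SQLite destination above, but check the DuckDB plugin's Hub page for the current version number and exact options before using it:

```yaml
kind: destination
spec:
  name: duckdb
  path: cloudquery/duckdb
  registry: cloudquery
  version: "vX.Y.Z"  # pin to the latest version listed on the Hub
  spec:
    connection_string: "./cloudquery-contest.duckdb"
```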
Step 3: Run the sync
cloudquery sync config.yml
This pulls all the AWS data from our S3 bucket into your destination. No AWS credentials needed - the bucket is public.
Step 4: Query your local data
Now write your SQL queries against your destination to solve the challenges!
# If using SQLite
sqlite3 cloudquery-contest.db
# If using DuckDB
duckdb cloudquery-contest.duckdb
# If using your own Postgres
psql your_connection_string
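Before diving into the challenges, it's worth a quick sanity check that the sync actually landed. A small sketch for the SQLite route (assuming the default `connection_string` from the config above):

```python
import sqlite3

# List the tables that landed in the SQLite database after the sync.
con = sqlite3.connect("cloudquery-contest.db")
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
)]
print(tables)  # expect the aws_* tables listed above
```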
Using Your Own AWS Data #
Got your own AWS account? You can sync your own data instead:
- Configure CloudQuery with the AWS source plugin
- Sync your AWS resources to any destination
- Run the same challenge queries against your data
- Share your queries and results with us for swag
This way you can show your boss the actual cost savings in your infrastructure.
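The source side of your config would swap the S3 plugin for the AWS plugin, roughly as sketched below. The version placeholder and table selection are assumptions to fill in; the AWS source uses your default credential chain (environment variables, `~/.aws/credentials`, an assumed role, etc.):

```yaml
kind: source
spec:
  name: aws
  path: cloudquery/aws
  registry: cloudquery
  version: "vX.Y.Z"  # pin to the latest version listed on the Hub
  tables: ["aws_ec2_instances", "aws_ec2_ebs_volumes", "aws_ec2_eips"]
  destinations:
    - sqlite
```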
Claiming Your Prize #
Post your work anywhere and tag us, or send it directly:
- X (Twitter): Tag @cloudqueryio with your results
- LinkedIn: Tag @cloudqueryio
- Threads: Tag @cloudqueryio
- Community forum: Post in the General category at community.CloudQuery.io
- Email: [email protected]
What to include:
- Screenshot showing your CloudQuery sync completion
- The queries you wrote
- Query results showing your answers
Need help? #
- Quickstart guide: CloudQuery.io/docs/quickstart
- S3 source plugin: hub.CloudQuery.io/plugins/source/CloudQuery/s3
- AWS source plugin: hub.CloudQuery.io/plugins/source/CloudQuery/aws
- All destination plugins: hub.CloudQuery.io/plugins/destination
- Questions? Ask in the community forum or hit us up on social. Let's see what you find.
FAQ #
Q: Do I need AWS credentials to participate?
A: No. The S3 bucket is public and read-only. You just need CloudQuery CLI and a destination to sync to.
Q: Can I download the files directly from S3 without syncing?
A: Technically yes, but that defeats the purpose. We want you to experience CloudQuery's sync workflow. Plus, you need to show proof of sync for the swag.
Q: What if I get the wrong answer?
A: That's fine - we want to see your process. Submit what you found, show your queries, and we'll work with you. This is about learning CloudQuery, not being perfect.
Q: Do I need to complete all three challenges?
A: No. Complete what you can. Each challenge you finish gets you swag. Challenge 1 = T-shirt. Challenge 2 = Mug. Challenge 3 = Sweatshirt.
Q: Which destination should I use?
A: Whatever you're comfortable with. SQLite requires zero setup. DuckDB is great for analytics. Your own Postgres if you have one running. CSV files if you want to load into Excel. Pick what works for you.
Q: Can I use AI tools to help write queries?
A: Yes. We're not testing your SQL memorization skills. We want to see you use CloudQuery and find the answers. Use whatever tools help you learn.
Q: What if my sync fails?
A: Make sure you have write permissions for your destination. Run with --log-level debug to see detailed errors. Ask in our community forum - we're here to help.
Q: Is this real AWS data?
A: It's based on real AWS resource structures, but sanitized and anonymized for the contest. All account IDs, resource names, and sensitive info have been changed.
Q: Can I share my queries with others?
A: Yes, but where's the fun in that? The learning happens when you figure it out yourself. Share your approach, not your answers.
Q: Do I need to know AWS to participate?
A: Basic understanding helps, but the hints tell you what to look for. This is more about SQL and CloudQuery than deep AWS knowledge.
Q: Can I really use my own AWS data?
A: Absolutely. Just share your queries and results (sanitized if needed) and we'll send you swag. We're excited to see how you analyze your own infrastructure.