CloudQuery News
Find the Cloud Waste Contest
We've loaded a bunch of AWS infrastructure data into a public Postgres database. Your job? Sync it with CloudQuery, run some SQL, and hunt down wasteful resources.
Complete challenges, win swag. The more you find, the more you win.
The Challenges
Level 1: Unattached Volumes → CloudQuery T-shirt
Find all EBS volumes sitting in 'available' state that aren't attached to any EC2 instance. These are burning storage costs for nobody.
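Need a nudge? Here's a starter sketch, not a graded answer. The table and column names are assumptions based on CloudQuery's standard AWS schema (aws_ec2_ebs_volumes with a state column), so check the schema you actually get after syncing:

-- Level 1 starter sketch: 'available' volumes are unattached by definition.
-- Table/column names assumed from CloudQuery's usual AWS tables.
SELECT volume_id, size, create_time
FROM aws_ec2_ebs_volumes
WHERE state = 'available';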
Level 2: Dead Weight Storage → CloudQuery Mug
Find volumes attached to EC2 instances that aren't running. Stopped or terminated instances don't need their volumes mounted, but they're still racking up charges.
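One possible shape for this query, sketched for a SQLite destination with JSON1 support. The attachments JSON layout and table names are assumptions based on CloudQuery's usual AWS schema, so verify them against your synced data:

-- Level 2 sketch: expand each volume's attachments, then check the state of
-- the instance each attachment points at. Names and JSON paths are assumptions.
SELECT v.volume_id,
       i.instance_id,
       json_extract(i.state, '$.Name') AS instance_state
FROM aws_ec2_ebs_volumes AS v,
     json_each(v.attachments) AS a
JOIN aws_ec2_instances AS i
  ON i.instance_id = json_extract(a.value, '$.InstanceId')
WHERE json_extract(i.state, '$.Name') <> 'running';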
Level 3: Public IPs → CloudQuery Sweatshirt
Find ALL cloud resources with public IPs. This includes EC2 instances with public IPs, Elastic IPs (especially unassociated ones costing you money), internet-facing load balancers, and publicly accessible S3 buckets.
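A UNION across resource tables is one way to pull these together. The table and column names below (aws_ec2_instances, aws_ec2_eips, aws_elbv2_load_balancers) are assumptions from CloudQuery's standard AWS schema, and S3 public-access status lives in separate tables, so treat this as a starting point only:

-- Level 3 sketch: collect public-facing resources into one result set.
SELECT 'ec2_instance' AS kind, instance_id AS id, public_ip_address AS address
FROM aws_ec2_instances
WHERE public_ip_address IS NOT NULL
UNION ALL
SELECT 'elastic_ip', allocation_id, public_ip
FROM aws_ec2_eips
-- add a check like "WHERE association_id IS NULL" to spot the unassociated ones
UNION ALL
SELECT 'load_balancer', arn, dns_name
FROM aws_elbv2_load_balancers
WHERE scheme = 'internet-facing';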
How It Works
- We've loaded AWS data into our read-only Postgres database
- You configure CloudQuery to use our Postgres as your SOURCE
- Run a CloudQuery sync to pull the data from our Postgres to your destination
- Query your local data to solve the challenges
- Post your results and claim your swag
Connection string:
postgresql://contest_readonly:readonly_secure_password_2024!@ep-silent-field-ad6w87nf-pooler.c-2.us-east-1.aws.neon.tech/cloudquery?sslmode=require&channel_binding=require
Getting Started: Syncing from Postgres
Step 1: Install CloudQuery CLI
Download from cloudquery.io/download
Step 2: Create your config file
Initialize a new CloudQuery project:
cloudquery init
This creates a starter config file. You need TWO things: a PostgreSQL SOURCE (our database) and a DESTINATION (your choice).
Edit your config file to sync from our Neon Postgres to SQLite locally:
kind: source
spec:
  name: postgresql
  path: cloudquery/postgresql
  registry: cloudquery
  version: 'v8.12.1'
  destinations: ['sqlite']
  tables: ['*'] # Sync all contest tables
  spec:
    connection_string: '${CONTEST_DB_CONNECTION}' # Our read-only Postgres
---
kind: destination
spec:
  name: sqlite
  path: cloudquery/sqlite
  registry: cloudquery
  version: 'v2.9.26'
  spec:
    connection_string: './contest_data.db'
Want to use a different destination? Check out all available options:
- PostgreSQL Source Plugin: hub.cloudquery.io/plugins/source/cloudquery/postgresql
- All Destination Plugins: hub.cloudquery.io/plugins/destination
Popular choices include DuckDB, BigQuery, Snowflake, CSV files, and 50+ others. The setup is the same - just swap the destination spec.
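For instance, swapping SQLite for DuckDB means replacing the destination block with something like the sketch below (the version string is a placeholder, so grab the current release from the hub page above) and changing destinations: ['sqlite'] to ['duckdb'] in the source spec:

kind: destination
spec:
  name: duckdb
  path: cloudquery/duckdb
  registry: cloudquery
  version: 'vX.Y.Z' # placeholder - use the latest version from the hub
  spec:
    connection_string: './contest_data.duckdb'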
Step 3: Set the connection string
export CONTEST_DB_CONNECTION='postgresql://contest_readonly:readonly_secure_password_2024!@ep-silent-field-ad6w87nf-pooler.c-2.us-east-1.aws.neon.tech/cloudquery?sslmode=require&channel_binding=require'
Note the single quotes: in an interactive bash or zsh session, the ! in the password would trigger history expansion inside double quotes.
Step 4: Run the sync
cloudquery sync config.yml
This pulls all the AWS data from our Neon Postgres into your destination.
Step 5: Query your local data
Now write your SQL queries against your destination to solve the challenges!
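If you went with the SQLite destination, the sqlite3 shell is the quickest way in. The table name in the second command is an assumption, so list your own tables first:

sqlite3 contest_data.db ".tables"   # see what actually synced
sqlite3 contest_data.db "SELECT COUNT(*) FROM aws_ec2_ebs_volumes;"   # sanity check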
Claiming Your Prize
Post your work anywhere and tag us, or send it directly:
- X (Twitter): Tag @cloudqueryio with your results
- LinkedIn: Tag @cloudqueryio
- Threads: Tag @cloudqueryio
- Community forum: Post in General at community.cloudquery.io
- Email: [email protected]
What to include:
- Screenshot showing your CloudQuery sync completion
- The queries you wrote
- Query results showing your answers
Need help?
- Quickstart guide: cloudquery.io/docs/quickstart
- PostgreSQL source plugin: hub.cloudquery.io/plugins/source/cloudquery/postgresql
- All destination plugins: hub.cloudquery.io/plugins/destination
- Neon Postgres: neon.tech
- Questions? Ask in the community forum or hit us up on social. Let's see what you find.
FAQ
Q: Do I need AWS credentials to participate?
A: No. The data is already in our Postgres database. You just need CloudQuery CLI and a destination to sync to.
Q: Can I query the Postgres database directly without syncing?
A: The database is read-only, so technically yes, but that defeats the purpose. We want you to experience CloudQuery's sync workflow. Plus, you need to show proof of sync for the swag.
Q: What if I get the wrong answer?
A: That's fine - we want to see your process. Submit what you found, show your queries, and we'll work with you. This is about learning CloudQuery, not being perfect.
Q: Do I need to complete all three challenges?
A: No. Complete what you can. Each challenge you finish gets you swag. Challenge 1 = T-shirt. Challenge 2 = Mug. Challenge 3 = Sweatshirt.
Q: Which destination should I use?
A: Whatever you're comfortable with. SQLite requires zero setup. DuckDB is great for analytics. Your own Postgres if you have one running. CSV files if you want to load into Excel. Pick what works for you.
Q: Can I use AI tools to help write queries?
A: Yes. We're not testing your SQL memorization skills. We want to see you use CloudQuery and find the answers. Use whatever tools help you learn.
Q: What if my sync fails?
A: Check that the connection string is set correctly and make sure you have write permissions for your destination. Run with --log-level debug (e.g. cloudquery sync config.yml --log-level debug) to see detailed errors. Ask in our community forum - we're here to help.
Q: Is this real AWS data?
A: It's based on real AWS resource structures, but sanitized and anonymized for the contest. All account IDs, resource names, and sensitive info have been changed.
Q: Can I share my queries with others?
A: Yes, but where's the fun in that? The learning happens when you figure it out yourself. Share your approach, not your answers.
Q: Do I need to know AWS to participate?
A: Basic understanding helps, but the hints tell you what to look for. This is more about SQL and CloudQuery than deep AWS knowledge.