This guide assumes that you are somewhat familiar with CloudQuery. If you are not, we recommend starting by reading the Quickstart guide and playing around with the CloudQuery CLI a bit first.
Before we dive in, let's quickly cover some core concepts of CloudQuery plugins, so that they're familiar when we see our first example.
A sync is the process that gets kicked off when a user runs `cloudquery sync`. A sync is responsible for fetching data from a third-party API and inserting it into the destination (database, data lake, stream, etc.). When you write a source plugin for CloudQuery, you only need to implement the part that interfaces with the third-party API. The rest of the sync process, such as delivering the data to the destination database, is handled by the CloudQuery SDK.
A table is the term CloudQuery uses for a collection of related data. In most databases it directly maps to an actual database table, but in some destinations it could be stored as a file, stream or other medium. A table is defined by a name, a list of columns, and a resolver function. The resolver function is responsible for fetching data from the third-party API and sending it to CloudQuery. We will look at examples of this soon!
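To make the three parts of a table concrete, here is a minimal sketch. The types and names below are simplified, hypothetical stand-ins, not the SDK's real API; they only illustrate that a table is a name, a list of columns, and a resolver function.

```typescript
// Illustrative sketch only: simplified stand-ins for the SDK's real
// Table/Column types, to make the three parts of a table concrete.
type Column = { name: string; type: "utf8" | "int64" };
type TableResolver = (push: (row: Record<string, unknown>) => void) => Promise<void>;

interface Table {
  name: string;
  columns: Column[];
  resolver: TableResolver;
}

// A hypothetical "widgets" table. A real resolver would call a third-party
// API; this one pushes static rows so the example is self-contained.
const widgetsTable: Table = {
  name: "widgets",
  columns: [
    { name: "id", type: "int64" },
    { name: "title", type: "utf8" },
  ],
  resolver: async (push) => {
    const apiResponse = [
      { id: 1, title: "first" },
      { id: 2, title: "second" },
    ];
    for (const row of apiResponse) {
      push(row);
    }
  },
};
```

During a sync, the SDK invokes the resolver and forwards each pushed row to the destination according to the column definitions.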
Every table will typically have its own `.js` file inside the plugin.
Resolvers are functions associated with a table that get called when it's time to populate data for that table. There are two types of resolvers:
Table resolvers are responsible for fetching data from the third-party API; a table resolver is a function matching the TableResolver TypeScript type. For top-level tables, `resolve` will only be called once per multiplexer client. For dependent tables, the resolver will be called once for each parent row, and the parent resource will be passed in as well. (More on this, and multiplexers, shortly.)
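The difference between top-level and dependent table resolvers can be sketched as follows. This is a simplified illustration with hypothetical names, not the SDK's real resolver signature; it only shows when each kind of resolver runs and what it receives.

```typescript
// Simplified sketch, not the SDK's real API.
type Row = Record<string, unknown>;

// Top-level table: the resolver is called once (per multiplexer client).
async function resolveOrgs(push: (row: Row) => void): Promise<void> {
  // a real plugin would call the API here, e.g. GET /orgs
  push({ org: "acme" });
  push({ org: "globex" });
}

// Dependent table: the resolver is called once per parent row, and the
// parent resource is passed in.
async function resolveRepos(parent: Row, push: (row: Row) => void): Promise<void> {
  // a real plugin would call e.g. GET /orgs/{org}/repos
  push({ org: parent["org"], repo: String(parent["org"]) + "-repo" });
}

// How the SDK (conceptually) drives the two resolvers during a sync:
async function sync(): Promise<Row[]> {
  const orgs: Row[] = [];
  const repos: Row[] = [];
  await resolveOrgs((o) => orgs.push(o));
  for (const org of orgs) {
    await resolveRepos(org, (r) => repos.push(r));
  }
  return repos;
}
```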
Column resolvers are responsible for mapping data from the third-party API into the columns of the table. In most cases, you will not need to implement this, as the SDK will automatically map data from the struct passed in by the table resolver to the columns of the table. But in some cases, you may need to implement a custom column resolver to fetch additional data or do custom transformations.
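The idea can be sketched like this. The types and the nested-field example are hypothetical, not the SDK's real column-resolver API; the point is only the contrast between the default same-named-field mapping and a custom transformation.

```typescript
// Simplified stand-in: a column resolver computes one column's value
// from the resource produced by the table resolver.
type Row = Record<string, unknown>;
type ColumnResolver = (resource: Row) => unknown;

// Default behavior (conceptually): copy the same-named field from the resource.
const defaultResolver = (name: string): ColumnResolver => (resource) => resource[name];

// Custom resolver: pull a value out of a nested object instead.
const ownerLoginResolver: ColumnResolver = (resource) =>
  (resource["owner"] as Row | undefined)?.["login"];

// A hypothetical API object with a nested "owner" field:
const apiObject: Row = { id: 7, owner: { login: "octocat" } };
```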
Multiplexers are a way to parallelize the fetching of data from the third-party API. Some top-level tables require multiple calls to fetch all their data. For example, a sync for the GitHub source plugin that fetches data for multiple organizations will need to make one call per organization to list all repositories. By multiplexing over organizations, these top-level queries can also be done in parallel. Each table defines the multiplexer that it should use. The CloudQuery plugin SDK will then call the table resolver once for each client in the multiplexer. Many plugins will not need to use multiplexers, but they are useful for plugins that need to fetch data for multiple accounts, organizations, or other entities.
See an example in the Airtable plugin.
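The flow described above can be sketched as follows. The names here are hypothetical and the SDK's real multiplexer API differs, but the shape is the same: one client per organization, with the table resolver invoked once per client, in parallel.

```typescript
// Hypothetical sketch of multiplexing over organizations.
type Client = { org: string };
type Row = Record<string, unknown>;

// The "multiplexer": produces one client per organization.
function orgMultiplexer(orgs: string[]): Client[] {
  return orgs.map((org) => ({ org }));
}

// Table resolver: receives one multiplexer client per invocation.
async function resolveRepositories(client: Client, push: (row: Row) => void): Promise<void> {
  // a real plugin would call e.g. GET /orgs/{client.org}/repos here
  push({ org: client.org, repo: client.org + "-repo" });
}

async function syncTable(clients: Client[]): Promise<Row[]> {
  const rows: Row[] = [];
  // The SDK calls the resolver once per client; Promise.all runs the
  // top-level fetches in parallel.
  await Promise.all(clients.map((c) => resolveRepositories(c, (r) => rows.push(r))));
  return rows;
}
```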
Before running the plugin locally, you will need to install its dependencies:
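Assuming the standard Node.js project layout with a `package.json`, that is:

```shell
npm install
```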
If you copied a reference plugin, this will include the CloudQuery plugin SDK. If you are starting from scratch, you will need to install the SDK separately:
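Assuming the SDK's published npm package name, that would be:

```shell
npm install @cloudquery/plugin-sdk-javascript
```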
There are two options for running a plugin as a developer before it is released: as a gRPC server, or as a standalone binary. We will briefly summarize both options here; you can also read about them in more detail in Running Locally.
This mode is especially useful for setting breakpoints in your code for debugging, as you can run the plugin in server mode from your IDE and attach a debugger to it. To run the plugin as a gRPC server, run the following command in the root of the plugin directory:
```shell
# Assuming you copied a reference plugin
npm run dev

# Or if you are starting from scratch
node 'path-to-main-node-file' serve
```
This will start a gRPC server on port 7777. You can then create a config file that sets the `registry` and `path` properties to point to this server. For example:
```yaml
kind: source
spec:
  name: "my-plugin"
  registry: "grpc"
  path: "localhost:7777"
  version: "v1.0.0"
  tables: ["*"]
  destinations:
    - "sqlite"
---
kind: destination
spec:
  name: sqlite
  path: cloudquery/sqlite
  version: "v1.2.1"
  spec:
    connection_string: ./db.sql
```
With the above configuration, we can now run `cloudquery sync` as normal:
```shell
cloudquery sync config.yaml
```
Note that when running a source plugin as a gRPC server, errors with the source plugin will be printed to the console running the gRPC server, not to the CloudQuery log like usual.
You can also build a Docker container for the plugin, and then either run it directly as a gRPC server or via the `docker` registry in the config file.
We need to first build the image:
```shell
docker build -t my-plugin:latest .
```
And then we can specify the `docker` registry in our config file:
```yaml
kind: source
spec:
  name: "my-plugin"
  registry: "docker"
  path: "my-plugin:latest"
  tables: ["*"]
  destinations:
    - "sqlite"
---
kind: destination
spec:
  name: sqlite
  path: cloudquery/sqlite
  version: "v2.4.9"
  spec:
    connection_string: ./db.sql
```
The container's entrypoint should run `node 'path-to-main-node-file'`. You can see an example Dockerfile here.
Once published, users can then import your plugin by specifying the image path in their config file together with the `docker` registry, e.g.:
```yaml
kind: source
spec:
  name: cloudwidgets
  path: ghcr.io/myorg/cloudwidgets
  registry: docker
```
This will download and run the plugin as a Docker container when `cloudquery sync` is run.
- The Airtable Source Plugin is an example of dynamically generating tables based on the schema of a third-party API and mapping API types to Arrow types.