Sink data from RisingWave to Google BigQuery
This guide describes how to sink data from RisingWave to Google BigQuery.
BigQuery is Google's fully managed, highly scalable data warehouse and analytics platform, designed for storing and analyzing large volumes of data.
You can test out this process on your own device by using the big-query-sink demo in the integration_test directory of the RisingWave repository.
PREMIUM EDITION FEATURE
This is a Premium Edition feature. All Premium Edition features are available out of the box without additional cost on RisingWave Cloud. For self-hosted deployments, users need to purchase a license key to access this feature. To purchase a license key, please contact our sales team at sales@risingwave-labs.com.
For a full list of Premium Edition features, see RisingWave Premium Edition.
PUBLIC PREVIEW
This feature is currently in public preview, meaning it is nearing the final product but may not yet be fully stable. If you encounter any issues or have feedback, please reach out to us via our Slack channel. Your input is valuable in helping us improve this feature. For more details, see our Public Preview Feature List.
Prerequisites
Before sinking data from RisingWave to BigQuery, please ensure the following:
- The BigQuery table you want to sink to is accessible from RisingWave.
- Ensure you have an upstream materialized view or table in RisingWave that you can sink data from.
Syntax
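The general form follows RisingWave's standard `CREATE SINK` statement, with BigQuery-specific options passed in the `WITH` clause. The parameter names below are described in the table that follows; `connector_parameter = 'value'` is a placeholder for those options:

```sql
CREATE SINK [ IF NOT EXISTS ] sink_name
[ FROM sink_from | AS select_query ]
WITH (
   connector = 'bigquery',
   connector_parameter = 'value', ...
);
```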
Parameters
Parameter Names | Description |
---|---|
sink_name | Name of the sink to be created. |
sink_from | A clause that specifies the direct source from which data will be output. sink_from can be a materialized view or a table. Either this clause or select_query query must be specified. |
AS select_query | A SELECT query that specifies the data to be output to the sink. Either this query or a sink_from clause must be specified. See SELECT for the syntax and examples of the SELECT command. |
type | Required. Data format. Allowed formats: `append-only` (output data with insert operations) and `upsert` (output a changelog stream; the sink must have a primary key). |
force_append_only | Optional. If true, forces the sink to be append-only even when the upstream emits update or delete changes; those changes are dropped. |
bigquery.local.path | Optional. The file path of the JSON key file on your local server. Details can be found in Service Accounts under your Google Cloud account. Either bigquery.local.path or bigquery.s3.path must be specified. |
bigquery.s3.path | Optional. The file path of the JSON key file in S3. Details can be found in Service Accounts under your Google Cloud account. Either bigquery.local.path or bigquery.s3.path must be specified. |
bigquery.project | Required. The BigQuery project ID. |
bigquery.dataset | Required. The BigQuery dataset ID. |
bigquery.table | Required. The BigQuery table you want to sink to. |
bigquery.retry_times | Optional. The number of times the system should retry a BigQuery insert operation before ultimately returning an error. Defaults to 5. |
auto_create | Optional. Defaults to false. If true, a new table will be automatically created in BigQuery when the specified table is not found. |
aws.credentials.access_key_id | Optional. The AWS access key ID used to read the JSON key file from S3. This must be specified if the key file is in S3. |
aws.credentials.secret_access_key | Optional. The AWS secret access key used to read the JSON key file from S3. This must be specified if the key file is in S3. |
region | Optional. The AWS region of the S3 bucket holding the JSON key file. This must be specified if the key file is in S3. |
Examples
We can create a BigQuery sink with a local JSON key file.
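A minimal sketch, assuming an upstream materialized view named `mv1` and placeholder values for the key file path and the project, dataset, and table IDs:

```sql
CREATE SINK big_query_sink_local
FROM mv1
WITH (
    connector = 'bigquery',
    type = 'append-only',
    force_append_only = 'true',
    bigquery.local.path = '/path/to/service-account-key.json',
    bigquery.project = 'your_project_id',
    bigquery.dataset = 'your_dataset_id',
    bigquery.table = 'your_table_id'
);
```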
Or we can create a BigQuery sink with an S3 JSON key file.
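A similar sketch using a key file stored in S3; the bucket path, AWS credentials, and region shown here are placeholders you must replace with your own values:

```sql
CREATE SINK big_query_sink_s3
FROM mv1
WITH (
    connector = 'bigquery',
    type = 'append-only',
    force_append_only = 'true',
    bigquery.s3.path = 's3://your-bucket/service-account-key.json',
    aws.credentials.access_key_id = 'your_access_key_id',
    aws.credentials.secret_access_key = 'your_secret_access_key',
    region = 'us-east-1',
    bigquery.project = 'your_project_id',
    bigquery.dataset = 'your_dataset_id',
    bigquery.table = 'your_table_id'
);
```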
Data type mapping
RisingWave Data Type | BigQuery Data Type |
---|---|
boolean | bool |
smallint | int64 |
integer | int64 |
bigint | int64 |
real | unsupported |
double precision | float64 |
numeric | numeric |
date | date |
character varying (varchar) | string |
time without time zone | time |
timestamp without time zone | datetime |
timestamp with time zone | timestamp |
interval | interval |
struct | struct |
array | array |
bytea | bytes |
JSONB | JSON |
serial | int64 |