
Google Cloud Spanner

Target DataSource:

Connection

Basic Functions

Schema Migration

If the target schema does not exist, BladePipe will automatically generate and execute CREATE statements based on the source metadata and the mapping rule.
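
As an illustration, below is a minimal sketch of deriving a Spanner (GoogleSQL) CREATE statement from generic source column metadata. The metadata shape and type mapping are assumptions for the example, not BladePipe's internal representation.

```python
# Sketch: derive a Spanner (GoogleSQL) CREATE TABLE statement from source
# column metadata. The tuple shape and TYPE_MAP are illustrative assumptions.

# Hypothetical mapping from generic source types to Spanner types.
TYPE_MAP = {"int": "INT64", "bigint": "INT64", "varchar": "STRING(MAX)",
            "datetime": "TIMESTAMP", "decimal": "NUMERIC", "bool": "BOOL"}

def build_create_table(table, columns, primary_keys):
    """columns: list of (name, source_type, nullable) tuples."""
    col_defs = []
    for name, src_type, nullable in columns:
        spanner_type = TYPE_MAP.get(src_type.lower(), "STRING(MAX)")
        null_sql = "" if nullable else " NOT NULL"
        col_defs.append(f"  {name} {spanner_type}{null_sql}")
    pk = ", ".join(primary_keys)
    return ("CREATE TABLE {} (\n{}\n) PRIMARY KEY ({})"
            .format(table, ",\n".join(col_defs), pk))

ddl = build_create_table(
    "orders",
    [("id", "bigint", False), ("note", "varchar", True)],
    ["id"],
)
```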

Full Data Migration

Migrate data by sequentially scanning data in tables and writing it in batches to the target database.
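
The sequential scan can be sketched as keyset pagination: read rows ordered by primary key, one page at a time, resuming after the last key seen. The in-memory row list below is a stand-in for a real source query.

```python
# Sketch of keyset pagination for a full table scan. `rows` stands in for
# the source table, pre-sorted by primary key.

def scan_in_batches(rows, page_size):
    """Yield successive batches of at most page_size rows."""
    last_key = None
    while True:
        page = [r for r in rows if last_key is None or r["id"] > last_key][:page_size]
        if not page:
            break
        last_key = page[-1]["id"]  # resume after the last key seen
        yield page

data = [{"id": i} for i in range(1, 8)]
batches = list(scan_in_batches(data, page_size=3))
```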

Incremental Data Sync

Sync of common DML like INSERT, UPDATE, DELETE is supported.

Data Verification and Correction

Verify all existing data. Optionally, you can correct the inconsistent data based on verification results. Scheduled DataTasks are supported.
For more information, see Create Verification and Correction DataJob.
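
The verification step can be sketched as a primary-key comparison that emits the rows needing correction. The row shape and field names are assumptions for the example.

```python
# Sketch of verify-then-correct: compare rows by primary key and collect
# the rows that would be rewritten to the Target.

def verify(source_rows, target_rows, pk="id"):
    """Return (rows that differ, rows missing from the target)."""
    target_by_pk = {r[pk]: r for r in target_rows}
    diffs, missing = [], []
    for row in source_rows:
        t = target_by_pk.get(row[pk])
        if t is None:
            missing.append(row)
        elif t != row:
            diffs.append(row)
    return diffs, missing

src = [{"id": 1, "v": 1}, {"id": 2, "v": 2}, {"id": 3, "v": 3}]
tgt = [{"id": 1, "v": 1}, {"id": 2, "v": 9}]
diffs, missing = verify(src, tgt)
# `diffs` would be corrected in the Target; `missing` re-inserted.
```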

Subscription Modification

Add, delete, or modify the subscribed tables with support for historical data migration. For more information, see Modify Subscription.

Table Name Mapping

Supported mapping rules: keep the name the same as in the Source, convert it to lowercase, convert it to uppercase, or truncate a trailing "_digit" suffix from the name.
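
A minimal sketch of the four mapping rules; the rule names are assumptions for the example.

```python
# Sketch of the table-name mapping rules described above.
import re

def map_table_name(name, rule):
    if rule == "same":
        return name
    if rule == "lower":
        return name.lower()
    if rule == "upper":
        return name.upper()
    if rule == "truncate_digit_suffix":
        # Drop a trailing "_<digits>" shard suffix, e.g. orders_0012 -> orders.
        return re.sub(r"_\d+$", "", name)
    raise ValueError(f"unknown rule: {rule}")
```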

Metadata Retrieval

Retrieve the target metadata with filtering conditions or target primary keys set from the source table.

Position Resetting

Reset positions by timestamp. Allow re-consumption of incremental data from a specific point in time via Change Streams.
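
Conceptually, resetting the position re-issues the change stream read from an earlier start timestamp. Spanner change streams are queried through a generated table-valued function named READ_<stream_name>; the stream name and heartbeat value below are examples.

```python
# Sketch: build the change stream read query used to re-consume from a
# chosen point in time. The start timestamp is bound as a query parameter.
from datetime import datetime, timezone

def change_stream_query(stream_name, start_ts, heartbeat_ms=10000):
    sql = (
        f"SELECT ChangeRecord FROM READ_{stream_name}("
        "start_timestamp => @start_ts, "
        "end_timestamp => NULL, "
        "partition_token => NULL, "
        f"heartbeat_milliseconds => {heartbeat_ms})"
    )
    return sql, {"start_ts": start_ts}

sql, params = change_stream_query(
    "bp_stream", datetime(2024, 1, 1, tzinfo=timezone.utc))
```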

Advanced Functions

Removal of Target Data before Full Data Migration

Remove the existing data in the Target before running the Full Data Migration, applicable to DataJob reruns and scheduled Full Data Migrations.

Recreating Target Table

Recreate target tables before running the Full Data Migration, applicable to DataJob reruns and scheduled Full Data Migrations.

Stream Load

Use Stream Load to write data to StarRocks BE. By default, batch write is adopted, with dynamic adjustment of data flush interval and batch size.
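
A hedged sketch of what a Stream Load call looks like at the HTTP level. The request is only built here, not sent; the host, credentials, and label are placeholders.

```python
# Sketch of a StarRocks Stream Load request (build only, not sent).
import base64
import urllib.request

def build_stream_load_request(http_host, db, table, csv_bytes, label):
    url = f"http://{http_host}/api/{db}/{table}/_stream_load"
    req = urllib.request.Request(url, data=csv_bytes, method="PUT")
    token = base64.b64encode(b"user:password").decode()  # placeholder account
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("label", label)              # idempotency label per batch
    req.add_header("column_separator", ",")
    req.add_header("Expect", "100-continue")
    return req

req = build_stream_load_request("127.0.0.1:8030", "demo", "orders",
                                b"1,foo\n2,bar\n", "job-0001")
```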

Handling of Zero Value for Time

Allow mapping zero time values to other values or data types to prevent errors when writing to the Target.
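
A minimal sketch of such a policy; the policy names and replacement values are assumptions for the example.

```python
# Sketch: normalize "zero" datetime values before writing to the Target.
ZERO_VALUES = {"0000-00-00", "0000-00-00 00:00:00"}

def handle_zero_time(value, policy="null"):
    if value not in ZERO_VALUES:
        return value
    if policy == "null":
        return None                      # write as NULL
    if policy == "epoch":
        return "1970-01-01 00:00:00"     # substitute a valid timestamp
    raise ValueError(f"unknown policy: {policy}")
```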

Custom Table Properties

Include settings for properties such as bucket count and replica count.

Setting Data Partitions

When creating a DataJob, specify partition definitions at the table level (static or dynamic). Automatically add these partition definitions during schema migration.

Scheduled Full Data Migration

For more information, see Create Scheduled Full Data DataJob.

Custom Code

For more information, see Custom Code Processing, Debug Custom Code and Logging in Custom Code.

Adding Virtual Columns

Support adding custom virtual columns with fixed values, such as region, ID, etc.

Setting Target Primary Key

Change the primary key to another field to facilitate data aggregation and other operations.

Data Filtering Conditions

Support data filtering with WHERE conditions written in SQL-92. For more information, see Data Filtering.

Limits

Google Cloud API

Requires Google Cloud Spanner API to be enabled for your project.

Target Table Type

Only the Primary Key model is supported.

Source Table Type

Migration and sync of tables without primary keys are not supported.

DDL Synchronization Errors
  • Consecutive DDLs on the same table may cause errors (because DDLs are executed asynchronously on the target StarRocks instance).
  • Errors may occur when modifying field constraints or some types of DDL.
  • If DDL errors occur, you can change the target table schema and then skip the errors by setting DataJob parameters.
Incremental Data Write Conflict Resolution Rule

With the Stream Load method, the primary key is used for full-row replacement.
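
This replacement rule can be sketched with a dictionary keyed by primary key: an incoming row replaces the stored row entirely, so columns absent from the change are dropped rather than preserved.

```python
# Sketch of primary-key full-row replacement semantics.

def apply_change(table, row, pk="id"):
    table[row[pk]] = dict(row)  # full replacement, not a column-level merge

t = {}
apply_change(t, {"id": 1, "a": 1, "b": 2})
apply_change(t, {"id": 1, "a": 9})  # "b" is dropped, not preserved
```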


Source

Prerequisites

Permissions for Service Account

See Permissions Required for Spanner.

Change Streams

Requires enabling Change Streams on the Spanner database to capture incremental changes.

Parameters

spannerProjectId

Google Cloud Project ID

spannerInstanceId

Spanner Instance ID

spannerDatabaseId

Spanner Database ID

credentialsPath

Path or URL to the Google Cloud Service Account JSON credential file.

fullBatchSize

Batch size used during Full Data Migration.

fullPagingCount

Paging partition size used during Full Data Migration.

scanParallel

Number of threads for parallel scanning during Full Data Migration.

snapshotRead

Whether to use snapshot read for scanning data. Helpful for providing a strong consistency point.

increStartPosition

Incremental start position timestamp for Change Data Capture (CDC).

heartbeatIntervalMs

Change Stream heartbeat interval in milliseconds.

filterDDL

Whether to filter out DDL statements in Incremental Synchronization.

fullDataSqlConditionEnabled

Add filtering conditions in SQL during source data scanning in Full Data migration.

Tips: To modify the general parameters, see General Parameters and Functions.


Target

Prerequisites

Permissions for Account

SELECT and DDL permissions (optional)

Port Preparation

Allow the migration and sync node (Worker) to connect to the StarRocks FE QueryPort and FE/BE HttpPort.

Parameters

host

Host and MySQL protocol port, corresponding to the StarRocks FE QueryPort.

httpHost

Host for StarRocks stream load, corresponding to StarRocks FE/BE HttpPort.

totalDataInMemMb

Maximum data size allowed in memory when writing in batches. If the data size exceeds this limit, or the wait time exceeds asyncFlushIntervalSec, the data is flushed to the write queue.

asyncFlushIntervalSec

Interval to wait before flushing when writing in batches. If the wait time exceeds this interval, or the data size exceeds totalDataInMemMb, the data is flushed to the write queue.

flushBatchMb

Maximum batch size per table. If a table's batch exceeds this limit, its data is flushed to the write queue.

realFlushPauseSec

Wait time before flushing data to StarRocks using Stream Load; 0 means no wait.

soTimeoutSec

TCP socket timeout (so_timeout) during QueryPort operations.

httpSoTimeoutSec

TCP socket timeout (so_timeout) during HttpPort operations.

enableTimeZoneProcess

Enable time zone conversion for time fields.

timezone

Time zone of the Target, e.g., +08:00, Asia/Shanghai, America/New_York.

maxInSizePerQuery

Maximum number of IN clause values per query during secondary verification. Queries exceeding this limit will be automatically split.
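
The splitting behavior can be sketched as chunking the value list and emitting one query per chunk; the table and column names below are examples.

```python
# Sketch: split a large IN (...) value list into chunks of at most
# max_in_size values and generate one verification query per chunk.

def split_in_queries(table, column, values, max_in_size):
    queries = []
    for i in range(0, len(values), max_in_size):
        chunk = values[i:i + max_in_size]
        placeholders = ", ".join(str(v) for v in chunk)
        queries.append(
            f"SELECT * FROM {table} WHERE {column} IN ({placeholders})")
    return queries

qs = split_in_queries("orders", "id", list(range(1, 8)), max_in_size=3)
```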

Tips: To modify the general parameters, see General Parameters and Functions.
