Migrate from Tinybird Classic

Tinybird Forward is a new way of working with your data projects. It includes changes to Tinybird APIs and CLI, some of which are incompatible with Tinybird Classic.

If you want to start building new projects in Tinybird Forward, see the Get started guide.

Migrate your datafiles

To migrate your datafiles, you need the Tinybird Classic CLI installed. See install Tinybird Classic CLI.

Before running the Tinybird Classic CLI commands, stop Tinybird Local by running tb local stop.

1. Authenticate and pull your project

Run the following commands to authenticate to your Classic workspace and pull your datafiles:

# Authenticate to your Tinybird Classic workspace
# using your admin token (named "admin <your-email>" in the UI)
tb auth --token <your-admin-token>

# Pull your datafiles and save them in the default directories
tb pull --auto

# Deactivate the virtual environment
deactivate

2. Install the Tinybird Forward CLI

Run the following command to install the Tinybird Forward CLI and the Tinybird Local container:

curl -L https://tinybird.co | sh

See install Tinybird Forward for more information.

3. Log in and create a workspace

Run the following commands to log in to Tinybird Cloud:

# Access the help menu to find the available list of regions
tb login -h

# Log into your region
tb login --host <your-region>

Follow the instructions to create a new workspace and return to the CLI.

4. Build your project

Run the following commands to start the local container and build your project, ensuring that your datafiles are compatible with Tinybird Forward:

# Start the local container
tb local start

# Build your project in watch mode
tb dev

As you develop, Tinybird rebuilds the project and notifies you of any errors.

You might need to make the following changes to your datafiles:

  • .pipe datafiles must include TYPE endpoint to be published as API endpoints, like this:

    example.pipe
    NODE my_node
    SQL >
        SELECT * FROM my_data_source
    
    TYPE endpoint
    
  • The VERSION tag in datafiles isn't supported. If any of your datafiles contain VERSION, remove it.

  • If you ingest data using the Kafka or S3 connectors, configure them in your project using .connection files, as sketched after this list. See connectors.
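
For example, a Kafka connection might be declared in a .connection file like the following sketch. The file name, secret names, and broker settings here are placeholders, and tb_secret reads secrets you define in your workspace; check the connectors docs for the exact settings:

my_kafka.connection
TYPE kafka
KAFKA_BOOTSTRAP_SERVERS {{ tb_secret("KAFKA_SERVERS") }}
KAFKA_SECURITY_PROTOCOL SASL_SSL
KAFKA_SASL_MECHANISM PLAIN
KAFKA_KEY {{ tb_secret("KAFKA_USERNAME") }}
KAFKA_SECRET {{ tb_secret("KAFKA_PASSWORD") }}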

When you finish developing, exit the tb dev session.

5. Deploy to Tinybird Cloud

Run the following commands to deploy your project:

# Optional: deploy to Tinybird Local to validate your project locally
tb deploy

# Deploy to Tinybird Cloud
tb --cloud deploy

See deployments for more information.

6. Use the new tokens

Your project is now available in Tinybird Forward.

Before you start ingesting new data or serving requests from your new workspace, update your tokens. See authentication.
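
For example, a request to a published endpoint with one of the new tokens looks like the following sketch, where my_endpoint and TB_TOKEN are placeholders and the API host depends on your region:

# my_endpoint and $TB_TOKEN are placeholders for your endpoint and token
curl "https://api.tinybird.co/v0/pipes/my_endpoint.json?token=$TB_TOKEN"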

Backfill your data

If you want to backfill existing data from your Tinybird Classic workspace, follow these steps.

The process to migrate your data is currently a manual, batch process. Reach out to Tinybird support (support@tinybird.co) if you need help with the migration.

1. Duplicate streaming ingestion

If you are ingesting streaming data from Kafka or the Events API, duplicate ingestion to your new Tinybird Forward workspace. This lets you serve the same data from Classic and Forward.

In your Forward workspace, make note of the minimum timestamp of the newly ingested data; this is your milestone, written as 'watermark_datetime' in the examples below. In the next steps, you backfill the data from before the milestone.
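
As a sketch of duplicated ingestion through the Events API, send each event to both workspaces. The data source name, tokens, and payload below are placeholders, and the API host depends on your region:

# Send the same event to the Classic workspace...
curl -X POST "https://api.tinybird.co/v0/events?name=streaming_data_source" \
    -H "Authorization: Bearer $CLASSIC_TOKEN" \
    -d '{"timestamp": "2025-01-01 00:00:00", "value": 1}'

# ...and to the Forward workspace
curl -X POST "https://api.tinybird.co/v0/events?name=streaming_data_source" \
    -H "Authorization: Bearer $FORWARD_TOKEN" \
    -d '{"timestamp": "2025-01-01 00:00:00", "value": 1}'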

2. Export to S3

In your Tinybird Classic workspace, create an S3 sink to export your data to an S3 bucket. See S3 sink.

Add a filter to your query to select only the data from before the milestone:

NODE sink_node
SQL >
    SELECT ... FROM my_streaming_data
    WHERE timestamp < 'watermark_datetime'

TYPE sink
# Add remaining sink settings

This helps ensure that your project doesn't contain duplicate data between the streaming and backfill data.

If you need to export lots of data, use query parameters or a file template to break the export into smaller files, as in the sketch below.
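
For example, with query parameters the sink node can select one time interval per export. This is a sketch: start_ts and end_ts are hypothetical parameter names, and the % line enables Tinybird's templating syntax:

NODE sink_node
SQL >
    %
    SELECT * FROM my_streaming_data
    WHERE timestamp >= {{ DateTime(start_ts, '2024-01-01 00:00:00') }}
      AND timestamp < {{ DateTime(end_ts, '2024-01-02 00:00:00') }}

TYPE sink
# Add remaining sink settings

Trigger the sink once per interval, passing different start_ts and end_ts values each time.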

3. Import from S3

In your workspace in Tinybird Forward, create an S3 connector and data source.
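
As a sketch, the connection and data source files might look like the following. The file names, region, role ARN, bucket URI, and schema are placeholders, so check the connectors docs for the exact settings:

s3.connection
TYPE s3
S3_REGION us-east-1
S3_ARN arn:aws:iam::111111111111:role/my-tinybird-role

s3_data_source.datasource
SCHEMA >
    `timestamp` DateTime `json:$.timestamp`,
    `value` Int32 `json:$.value`

IMPORT_CONNECTION_NAME s3
IMPORT_BUCKET_URI s3://my-bucket/backfill/*.ndjson
IMPORT_SCHEDULE @once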

Test and deploy your changes to start ingesting your data.

4. Combine streaming and backfill data

To combine your streaming and backfill data, create two materialized pipes that feed into the same target data source:

The following is for streaming data:

materialize_streaming.pipe
NODE get_streaming_data
SQL >
    SELECT * FROM streaming_data_source
    WHERE timestamp >= 'watermark_datetime'

TYPE materialized
DATASOURCE combined_data

The following is for backfill data:

materialize_backfill.pipe
NODE get_backfill_data
SQL >
    SELECT * FROM s3_data_source
    WHERE timestamp < 'watermark_datetime'

TYPE materialized
DATASOURCE combined_data

Create the target .datasource file, as sketched below, then test everything locally and deploy your changes to Tinybird Cloud.
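
A minimal sketch of the target data source, with placeholder columns matching the examples above:

combined_data.datasource
SCHEMA >
    `timestamp` DateTime,
    `value` Int32

ENGINE "MergeTree"
ENGINE_SORTING_KEY "timestamp"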
