# Migrate from Tinybird Classic
Tinybird Forward is a new way of working with your data projects. It includes changes to Tinybird APIs and CLI, some of which are incompatible with Tinybird Classic.
If you want to start building new projects in Tinybird Forward, see the Get started guide.
## Considerations before migrating
Before migrating your data project from Tinybird Classic, understand these key differences in Tinybird Forward:
- Development happens locally using the Tinybird Local container, not in the UI.
- The following features are currently not supported:
  - DynamoDB connector
  - Sinks
  - BI Connector
  - Shared data sources
- Datafile syntax has changed: .pipe datafiles need `TYPE endpoint` to be published as API endpoints, and the `VERSION` tag is no longer supported. See Build your project below.
- CI/CD workflows use different CLI commands than Classic. See CI/CD.
If these changes work for your use case, continue reading to learn how to migrate.
Migration is only available for Free or Developer plan users. If you're an Enterprise customer, contact Tinybird support (support@tinybird.co) to discuss migration options.
## Migrate your datafiles
To migrate your datafiles, you need the Tinybird Classic CLI installed. See install Tinybird Classic CLI.
Before running the Tinybird Classic CLI commands, deactivate Tinybird Local by running `tb local stop`.
### Authenticate and pull your project
In a virtual environment with the Tinybird Classic CLI installed, run the following commands to authenticate to your Tinybird Classic workspace and pull your datafiles:
```bash
# Authenticate to your Tinybird Classic workspace
tb auth --token [admin <your-email> token]

# Pull your datafiles and save them in the default directories
tb pull --auto

# Deactivate the virtual environment
deactivate
```
### Install the Tinybird Forward CLI
Run the following command to install the Tinybird Forward CLI and the Tinybird Local container:
```bash
curl https://tinybird.co | sh
```
See install Tinybird Forward for more information.
### Log in and create a workspace
Run the following command to log in to Tinybird Cloud:
```bash
# Access the help menu to find the available list of regions
tb login -h

# Log in to your region
tb login --host <your-region>
```
Follow the instructions to create a new workspace and return to the CLI.
### Build your project
Run the following commands to start the local container and build your project, ensuring that your datafiles are compatible with Tinybird Forward:
```bash
# Start the local container
tb local start

# Build your project in watch mode
tb dev
```
As you develop, Tinybird rebuilds the project and notifies you of any errors.
You might need to make the following changes to your datafiles:
- .pipe datafiles must include `TYPE endpoint` to be published as API endpoints, like this:

  example.pipe

  ```
  NODE my_node
  SQL >
      SELECT * FROM my_data_source

  TYPE endpoint
  ```
- The `VERSION` tag in datafiles isn't supported. If any of your datafiles contain `VERSION`, remove it.
- If you ingest data using the Kafka or S3 connectors, configure them in your project using .connection files. See connectors; a sample connection file follows this list.
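For example, a Kafka connection file might look like the following sketch. The file name, settings, and secret names are illustrative, not a definitive configuration; check the connectors documentation for the exact options your setup needs.

```
# kafka_prod.connection (illustrative name and settings)
TYPE kafka

KAFKA_BOOTSTRAP_SERVERS {{ tb_secret("KAFKA_SERVERS") }}
KAFKA_KEY {{ tb_secret("KAFKA_KEY") }}
KAFKA_SECRET {{ tb_secret("KAFKA_SECRET") }}
```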
When you finish developing, exit the `tb dev` session.
### Deploy to Tinybird Cloud
Run the following commands to deploy your project:
```bash
# Optional: deploy to Tinybird Local to validate your project locally
tb deploy

# Deploy to Tinybird Cloud
tb --cloud deploy
```
See deployments for more information.
Your project is now set up in Tinybird Forward, but it doesn't contain data. The next section explains how to backfill data from your Tinybird Classic workspace.
## Use the new tokens
Before you start ingesting new data or serving requests from your new workspace, update your tokens. See authentication.
## Backfill your data
If you want to backfill existing data from your Tinybird Classic workspace, follow these steps.
See Backfill from external sources for tips on how to optimize backfills.
The process to migrate your data is currently a manual, batch process. Reach out to Tinybird support (support@tinybird.co) if you need help with the migration.
### Duplicate streaming ingestion
If you are ingesting streaming data from Kafka or the Events API, duplicate ingestion to the landing data source in your new Tinybird Forward workspace. This lets you serve the same data from Classic and Forward.
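For example, if you ingest through the Events API, you can send each event to both workspaces while you migrate. A minimal sketch, assuming a landing data source named my_streaming_data; the host and token variables are illustrative:

```bash
EVENT='{"timestamp": "2025-01-01 00:00:00", "value": 1}'

# Existing ingestion into the Classic workspace
curl -X POST "https://$CLASSIC_HOST/v0/events?name=my_streaming_data" \
  -H "Authorization: Bearer $CLASSIC_TOKEN" \
  -d "$EVENT"

# Duplicated ingestion into the Forward workspace
curl -X POST "https://$FORWARD_HOST/v0/events?name=my_streaming_data" \
  -H "Authorization: Bearer $FORWARD_TOKEN" \
  -d "$EVENT"
```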
In your Tinybird Forward workspace, identify the minimum timestamp to use as your milestone. In the next steps, you backfill data from before the milestone.
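For example, assuming your landing data source has a timestamp column, a query like this sketch returns the milestone:

```sql
-- Earliest record already streamed into the Forward workspace;
-- backfill everything older than this value.
SELECT min(timestamp) AS milestone
FROM my_streaming_data
```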
### Export to S3
In your Tinybird Classic workspace, create an S3 sink to export the data from the landing data source to an S3 bucket. Use Parquet format for optimal performance.
Add a filter in your query to select data from before the milestone:
```
NODE sink_node
SQL >
    SELECT ...
    FROM my_streaming_data
    WHERE timestamp < 'milestone_datetime'

TYPE sink
# Add remaining sink settings
```
This ensures that your project doesn't contain duplicate records when combining streaming and backfill data.
If your query exceeds the sink limits, use query parameters or a file template to break the export into smaller files.
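For example, here's a minimal sketch of a parameterized version of the sink query that exports one date range per run; the parameter names are illustrative:

```
NODE sink_node
SQL >
    %
    SELECT *
    FROM my_streaming_data
    WHERE timestamp >= {{DateTime(start_date)}}
      AND timestamp < {{DateTime(end_date)}}

TYPE sink
# Add remaining sink settings
```

Run the sink once per range, advancing the window until you reach the milestone.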
### Import from S3
In your Tinybird Forward workspace, create an S3 connector and data source isolated from downstream resources. Keeping ingestion and transformation separate simplifies the process.
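As a sketch of what this can look like, assume an S3 connection named backfill_s3, Parquet files under s3://my-bucket/backfill/, and a two-column schema. All names and settings here are illustrative; check the connectors documentation for the exact options:

```
# backfill_s3.connection (illustrative settings)
TYPE s3
S3_REGION us-east-1
S3_ARN arn:aws:iam::111111111111:role/tinybird-ingest

# backfill_events.datasource (illustrative schema, no downstream resources)
SCHEMA >
    `timestamp` DateTime,
    `value` Int64

IMPORT_CONNECTION_NAME backfill_s3
IMPORT_BUCKET_URI s3://my-bucket/backfill/*.parquet
IMPORT_SCHEDULE @once
```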
Test and deploy your changes to start ingesting your data.
### Combine streaming and backfill data
After importing your backfill data from S3, you need to merge it with your real-time streaming data. You can do this in two ways:
#### 1. Use a copy pipe
Create a copy pipe and set the streaming data source as the destination.
Copy jobs are atomic and give you better control over the backfill process.
If your data exceeds the copy limits, break the backfill into smaller jobs using query parameters to filter by date range or tenant ID.
This approach is best for projects with large data volumes and many downstream dependencies (e.g. multiple complex materialized views).
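A minimal sketch of such a copy pipe, assuming the backfill landed in an illustrative backfill_events data source and my_streaming_data is the streaming destination:

```
NODE backfill_copy
SQL >
    %
    SELECT *
    FROM backfill_events
    WHERE timestamp >= {{DateTime(start_date)}}
      AND timestamp < {{DateTime(end_date)}}

TYPE copy
TARGET_DATASOURCE my_streaming_data
COPY_MODE append
```

Each job copies one window; rerun it with different parameter values until the whole backfill range is covered.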
#### 2. Use a materialized view
Create a materialized pipe and set the streaming data source as the destination.
This approach is best for projects with few downstream dependencies.
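A minimal sketch of such a materialized pipe, again assuming an illustrative backfill_events data source:

```
NODE backfill_mv
SQL >
    SELECT *
    FROM backfill_events

TYPE materialized
DATASOURCE my_streaming_data
```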
Test everything locally, and deploy your changes to Tinybird Cloud.