
Data Platform



Iterate seamlessly
Define resources in a GitHub repository with version control and easy deployments.
“With all the features considered, Tinybird was the best option for us. We really liked the fact that we could define the resources in the GitHub repository and then have multiple people working on it with version control. The ease of deploying things to the workspace - everything worked out of the box very well.”

Malu Soares
Software Engineer at Framer

Zero downtime migrations
schema-migration-plan.md

# DIY ClickHouse Schema Migration
ClickHouse performance
Flexible schema
- Stop ingestion before migration
- Manually CREATE new tables
- Write scripts to backfill data
- Rebuild materialized views from scratch
- Coordinate downtime with your team
- Hope nothing breaks

# Tinybird Deployments
ClickHouse performance
Flexible schema
+ Zero downtime, always
+ Automatic cross-version bridging
+ Smart backfills with Forward Queries
+ Materialized views handled automatically
+ Staging + pre-deploy checks
+ Instant rollback on failure
+ On-demand compute for large migrations

How it works
Schema evolution

What schema changes can Tinybird handle?
Tinybird supports adding and removing columns, changing column types, modifying sorting keys, updating partition keys, changing TTL settings, adding or updating materialized views, and modifying engine settings. All changes are handled through the deployment system with automatic backfills.
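As a sketch, changes like these are expressed directly in the datasource file and picked up by the next deployment. The file contents, column names, and values below are hypothetical, not taken from a real project:

```
SCHEMA >
    `id` String,
    `created_at` DateTime,
    `amount` Float64

ENGINE "MergeTree"
ENGINE_SORTING_KEY "created_at, id"
ENGINE_TTL "created_at + INTERVAL 90 DAY"
```

Editing the schema, sorting key, or TTL here and running `tb deploy` is what triggers the migration and any required backfill.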
Is my data available during a migration?
Yes. Your data remains fully available and consistent throughout the entire deployment. Tinybird creates temporary real-time tables to capture incoming data, and UNION ALL read views to serve queries from both old and new tables simultaneously. Once the migration completes, everything is swapped atomically.
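Conceptually, the read view served during a migration has this shape. This is an illustrative sketch of the UNION ALL pattern described above, not Tinybird's literal internal SQL, and the table names are hypothetical:

```sql
-- Queries see existing rows plus rows that arrived mid-migration
SELECT * FROM events_v1            -- existing table, old schema
UNION ALL
SELECT * FROM events_v2_incoming   -- temporary table capturing live ingestion
```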
What is a Forward Query?
A Forward Query is a SQL SELECT statement that tells Tinybird how to transform existing data to match a new schema. For example, when converting a column from String to UUID, you provide a Forward Query with the appropriate CAST expression. Forward Queries are required when schema changes are incompatible with existing data.
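A sketch of the String-to-UUID case in a `.datasource` file, assuming a hypothetical `user_id` column (`toUUID` is ClickHouse's conversion function):

```
SCHEMA >
    `user_id` UUID

FORWARD_QUERY >
    SELECT toUUID(user_id) AS user_id
```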
How long does a migration take?
Migration duration depends on data volume and the nature of the changes. For small datasets, deployments complete in seconds. For large datasets (50GB+ or 100M+ rows), Tinybird automatically provisions dedicated on-demand compute instances to handle the migration without affecting production workloads.
Can I test changes before deploying to production?
Yes. You can use `tb deploy --check` to validate changes before deployment, and staging deployments to test with real data. You can even ingest data into staging and query staging endpoints to verify behavior before promoting to live.
What happens if a deployment fails?
If a deployment fails at any point, Tinybird automatically discards the staging deployment and keeps the live version. Your production data and endpoints remain completely unaffected. You can inspect the failure, fix the issue, and redeploy.
Does every change trigger a full migration?
No. Tinybird's deployment algorithm is smart about what needs to migrate. If only a downstream materialized view changes, the upstream landing table stays untouched via cross-version bridging. Only the affected parts of the ingestion chain get new versions and backfills.
How are materialized views handled during migrations?
When schema changes affect tables with materialized views, Tinybird creates bridging MVs between deployment versions to ensure new data flows correctly to both old and new tables. Downstream tables are backfilled using either the MV query or an explicit Forward Query you provide.
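For reference, a materialized view in a Tinybird project is itself defined as code in a pipe file, and its SQL is what the deployment system can reuse to backfill the downstream table. The node, table, and column names below are hypothetical:

```
NODE events_by_day
SQL >
    SELECT toDate(timestamp) AS day, count() AS events
    FROM events
    GROUP BY day

TYPE materialized
DATASOURCE events_by_day_mv
```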
Can I run deployments from CI/CD?
Yes. Tinybird's CLI (`tb`) integrates with any CI/CD system. You define your data project as code (datasource and pipe files), commit changes to Git, and deploy via `tb deploy` in your pipeline. Pre-deploy checks (`tb deploy --check`) can be used as validation gates.
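A minimal sketch of such a pipeline as a GitHub Actions workflow. The job layout, install step, and `TB_TOKEN` secret name are assumptions for illustration, not Tinybird-prescribed values; only `tb deploy` and `tb deploy --check` come from the text above:

```yaml
name: tinybird-deploy
on:
  pull_request:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    env:
      TB_TOKEN: ${{ secrets.TB_TOKEN }}   # assumed secret name for workspace auth
    steps:
      - uses: actions/checkout@v4
      # Install the tb CLI here per Tinybird's docs (placeholder step)
      - name: Validate changes on pull requests
        if: github.event_name == 'pull_request'
        run: tb deploy --check
      - name: Deploy on merge to main
        if: github.event_name == 'push'
        run: tb deploy
```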
How is on-demand compute billed?
On-demand compute is billed in core-minutes at regional rates, and is only used when migrations exceed 50GB or 100M rows. For example, a 2-hour migration on a 64-core instance in AWS US East 1 consumes 64 cores × 120 minutes = 7,680 core-minutes, which costs approximately 22.3 credits.

