We're in love with Tinybird Forward!
We've designed a new Tinybird CLI to make managing state changes in your data infrastructure effortless. No more manual steps or error-prone deployments: just make your changes locally and run tb deploy. Think of it as a compiler, a VM, and a deployment tool all in one.
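Here's a minimal sketch of that local-first loop, assuming a project with a workspace already configured; the datafile path and editor below are just placeholders:

    # 1. Edit your datafiles locally (path is hypothetical)
    vim datasources/events.datasource

    # 2. Apply the change to your workspace in one step
    tb deploy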
This new local-first approach simplifies your workflow, letting you focus on shipping faster. We can't wait for you to try it out in a new workspace. Try the beta now!
New job overview
You can now see your complete job history in the Dashboard overview. Jobs run various operations in the background: some run periodically, such as copy jobs or writes to a sink, and others run on demand, like populate operations. Previously, dashboard visibility was limited to the last 48 hours.
With this update, you get a more comprehensive view of your job activity: you can access job history for the past 30 days and filter the list by both job status and type. This means you can:
- Identify performance bottlenecks by sorting jobs by duration to spot which ones take the longest.
- Track issues more effectively by easily checking whether any jobs encountered problems in recent days.
- Monitor queue status to see if jobs are waiting to start or if operations like your latest populate are still running.
We hope this new view helps you maintain a healthier data project by keeping you better informed about your job activity.
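You can also pull the same history programmatically through the Jobs API. The sketch below assumes the kind and status query parameters and a $TB_TOKEN environment variable holding a workspace admin token; double-check the Jobs API docs for the exact parameter names:

    # List recent copy jobs that ended in error (parameter names assumed; see the Jobs API docs)
    curl -s -H "Authorization: Bearer $TB_TOKEN" \
      "https://api.tinybird.co/v0/jobs?kind=copy&status=error"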
Copy operations are now safer to retry
Copy operations transfer data between two data sources and might include downstream materialized views, with each step copying data separately. Previously, data was inserted into the destination immediately at each step, which could lead to inconsistency if later steps failed due to memory issues or timeouts.
With this update, data is only inserted into destination data sources after all steps are successfully completed. This means:
- If any step fails, no data is inserted anywhere.
- Jobs can be safely retried without risk of data inconsistency.
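For example, if a copy job failed, you can trigger it again without worrying about duplicate or partial data. This is a hedged sketch assuming a copy pipe named sales_copy and the on-demand copy endpoint of the Pipes API; adjust the pipe name and host for your workspace:

    # Re-run a copy job on demand after a failure (pipe name is hypothetical)
    curl -s -X POST -H "Authorization: Bearer $TB_TOKEN" \
      "https://api.tinybird.co/v0/pipes/sales_copy/copy"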
S3 Connector performance and usability improvements
We've made improvements to the S3 Connector under the hood. New files uploaded to the bucket that match the configured file expression are now detected through SQS events, instead of scanning the whole bucket every few minutes. The initial backfill is also more efficient, and you can expect much better throughput. Remember you can always use the tinybird.datasources_ops_log Service Data Source to check the appended files and how long they take.
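For instance, a quick check from the CLI; this is a sketch where the data source name my_s3_ds is hypothetical and the column selection follows the Service Data Sources docs, so verify the exact column names for your workspace:

    # Inspect recent appends to an S3-backed data source (data source name is hypothetical)
    tb sql "
      SELECT timestamp, event_type, rows, elapsed_time
      FROM tinybird.datasources_ops_log
      WHERE datasource_name = 'my_s3_ds'
        AND event_type = 'append'
      ORDER BY timestamp DESC
      LIMIT 20
    "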
From now on, when you create a new S3 Connection, the generated AWS IAM Policy includes two new permissions for bucket notifications: s3:GetBucketNotification and s3:PutBucketNotification. If you want to reuse an existing S3 Connection, you have two options:
- You can add these permissions to the IAM role you have in AWS.
- You can create a new S3 Connection from the UI.
We recommend the first option. Here's an example policy with the permissions you need:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "s3:GetObject", "s3:ListBucket", "s3:GetBucketNotification", "s3:PutBucketNotification", "s3:GetBucketLocation", "s3:PutObject", "s3:PutObjectAcl" ], "Resource": [ "arn:aws:s3:::{bucket-name}", "arn:aws:s3:::{bucket-name}/*" ] } ] }
Also note that we removed the s3:ListAllMyBuckets permission, which was a concern for some users.
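If you go with the first option, you can attach the updated policy to your existing role with the AWS CLI. This is only a sketch: the role name, policy name, and file path below are hypothetical, so replace them with the ones used by your S3 Connection:

    # Save the policy above as s3-connector-policy.json, then update the role's inline policy
    # (role and policy names are hypothetical)
    aws iam put-role-policy \
      --role-name tinybird-s3-connector-role \
      --policy-name tinybird-s3-connector-policy \
      --policy-document file://s3-connector-policy.json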
Burst mode for QPS at no extra cost
At Tinybird we learn and iterate fast based on your feedback. That's why we've introduced burst mode for queries per second (QPS) at no extra cost. New limits are also more generous now.
Your operations can now burst up to 2x the QPS allowed in your plan for API endpoint requests or SQL queries. See Burst mode for details and examples.
New doc on billing concepts
What are active minutes? And queries per second? How does burst mode work? To answer all these questions following the launch of new plans, we've added a new doc on billing concepts.
Improvements and bug fixes
- Improved the way we estimate plan size when migrating or upgrading to a new plan.
- Improved error messages when the cluster is under heavy load.