Massive update after a busy launch week #2
Here are the 3 things you can't miss:
- We released the new native Snowflake connector, which provides an experience very similar to the Google BigQuery one.
- Newly created columns in Snowflake or BigQuery are now added automatically to the connected Tinybird Data Sources.
- You can generate mock data directly from your Tinybird dashboard using GPT, so there's no excuse not to try out your new ideas, even before your real data is ready.
Other notable changes
- Made it easier to connect your Confluent Kafka Streams using the Confluent connector.
- Data Copy operations are now atomic, ensuring that data is copied when it should be.
- Improved error feedback in the CLI when creating a new Workspace fails and when pushing a Pipe with an empty SQL Node.
- The OpenAPI schema we generate when creating an API Endpoint is now a valid OpenAPI 3.0 schema.
- We added the option to select advanced settings, like the engine or the sorting key, when creating a Data Source from the UI (see the example after this list).
- Destructive actions in the UI now require an extra confirmation step. No more unintentional breaks.
- We made some usability improvements to the Auth Tokens page.
- We fixed some inconsistencies in the reported number of Workspaces a Data Source is shared with.
- We fixed the date filter in the Time Series public view.
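For reference, the advanced settings now exposed in the UI are the same kind of engine settings you can already declare in a .datasource datafile. Here is a minimal sketch, assuming a simple events table (the column names, JSONPaths, and keys below are illustrative, not part of this release):

```
DESCRIPTION >
    Example events Data Source with explicit engine settings

SCHEMA >
    `timestamp` DateTime `json:$.timestamp`,
    `user_id` String `json:$.user_id`,
    `event` String `json:$.event`

ENGINE "MergeTree"
ENGINE_PARTITION_KEY "toYYYYMM(timestamp)"
ENGINE_SORTING_KEY "timestamp, user_id"
```

Picking the sorting key up front matters because it determines how data is ordered on disk and therefore how fast your most common filters will be.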