Tinybird React library for in-product analytics (beta)
We're excited to announce a major upgrade to our Tinybird React library, @tinybirdco/charts, designed to give frontend developers robust tools for building in-product analytics. This update brings everything you need to create rich charts and better user experiences with ease.
Key features:
- Direct data querying: A new hook allows you to directly query data from Tinybird, streamlining the process of fetching and displaying data.
- Third-party integration: Use your data with any third-party chart library or custom component, offering great flexibility.
- Ready-to-use components: 6 ready-to-use Tinybird Charts and a table component, enabling quick and easy data visualization.
- Total customization control: Full control over Tinybird Charts customization to fit your unique needs.
- Polling: Refresh your data with periodic updates for real-time data needs like trading charts.
- ChartProvider: Share styling and query configurations across multiple charts for a consistent look and feel.
- State control: Total control over the state of your charts (loaded, loading, error...).
- Token management: An exposed fetcher simplifies token management.
Check out the Charts documentation to get started.
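For a quick taste, here's a minimal sketch of the direct-querying hook feeding a custom component. It assumes the hook is named `useQuery` and accepts a `refreshInterval` polling option, per the Charts docs, and it uses a hypothetical `top_pages` Endpoint with `pathname` and `visits` columns:

```tsx
import { useQuery } from '@tinybirdco/charts'

export function TopPages({ token }: { token: string }) {
  // Query a published Endpoint directly; the same data can feed any
  // third-party chart library or a custom component like this one.
  const { data, loading, error } = useQuery({
    endpoint: 'https://api.tinybird.co/v0/pipes/top_pages.json', // placeholder Endpoint
    token,
    refreshInterval: 5000, // assumed polling option: refetch every 5 seconds
  })

  // Full control over chart state: loading, error, loaded.
  if (loading) return <p>Loading…</p>
  if (error) return <p>Something went wrong</p>

  return (
    <ul>
      {data?.map((row) => (
        <li key={String(row.pathname)}>
          {String(row.pathname)}: {Number(row.visits)}
        </li>
      ))}
    </ul>
  )
}
```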
Create a Pipe from a Playground
Playgrounds are a great way to explore your data: you can run one-off queries without messing up your data project, and they're a sandbox for developing Pipes. Tinybird users often need a Playground to become a Pipe so it can grow into an API Endpoint, a Materialized View... and fulfill its real calling.
Now you can duplicate any Playground into a Pipe in just a couple of clicks.
Specify rate limit for JWT tokens
You can now specify a rate limit, as a maximum number of requests per second, when defining a JWT token. This is a particularly useful safety net to stop any published Endpoint from accidentally blowing up your Workspace usage! Once the limit is reached, any new request receives a 429 response code.
Read more about it in the "Rate limits for JWTs" docs.
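As a sketch, here's how you might set that limit when self-signing a JWT in Node with the `jsonwebtoken` package. The `limits.rps` claim follows the "Rate limits for JWTs" docs; the workspace ID, pipe name, and signing secret are placeholders:

```ts
import jwt from 'jsonwebtoken'

// Sketch of self-signing a JWT with a rate limit. The `limits.rps` claim
// follows the "Rate limits for JWTs" docs; everything else is a placeholder.
const token = jwt.sign(
  {
    workspace_id: 'YOUR_WORKSPACE_ID',
    name: 'frontend_jwt',
    exp: Math.floor(Date.now() / 1000) + 60 * 60, // expires in one hour
    scopes: [{ type: 'PIPES:READ', resource: 'top_pages' }],
    limits: { rps: 10 }, // beyond 10 requests per second, requests get a 429
  },
  process.env.TINYBIRD_ADMIN_TOKEN as string // the Workspace admin token signs the JWT
)
```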
Copy Pipe mode update
Previously, when you created a Copy Pipe, it always ran in "append" mode, adding new results to the existing Data Source. Now, during creation, you can choose between "Append only new data" and "Replace all data".
Additionally, you can change the mode from the Copy Pipe options, even for Copy Pipes created previously.
Bool and Decimal types are now supported
We've added support for the following types in our Data Sources:
- `Bool`: a field with `true`/`false` as possible values.
- `Decimal(P,S)`/`Decimal32(S)`/`Decimal64(S)`/`Decimal128(S)`/`Decimal256(S)`: fields that precisely store signed decimal values with up to 76 digits, including fractional ones. The `P` parameter defines the total number of digits, while `S` sets the number of digits in the fractional part. For example, `Decimal(20,9)` holds up to 20 digits, 9 of them after the decimal point.
Schema example using Bool and Decimal types:

```
SCHEMA >
    `bool_value` Bool `json:$.bool_value`,
    `decimal_value` Decimal(20,9) `json:$.decimal_value`
```
See the "Supported data types" docs for more information and limitations to be aware of.
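To see the new types in action, here's a sketch of ingesting a row that matches the schema above through the Events API; the Data Source name and token are placeholders:

```ts
// Sketch: send one NDJSON event matching the schema above. The Data Source
// name and token are placeholders.
await fetch('https://api.tinybird.co/v0/events?name=my_datasource', {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.TINYBIRD_TOKEN}` },
  body: JSON.stringify({
    bool_value: true,
    decimal_value: 42.123456789, // 9 fractional digits fit Decimal(20,9)
  }),
})
```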
Support for default values in NDJSON Data Sources
To improve the ingestion of NDJSON Data Sources, we've added support for default values, in the same way we already support them for CSV Data Sources.
To define a default value for a column in an NDJSON Data Source, use the `DEFAULT` keyword in the schema definition, after the JSONPath. Here's an example:
```
SCHEMA >
    `timestamp` DateTime `json:$.timestamp` DEFAULT now(),
    `string_value` String `json:$.string_value` DEFAULT '-',
    `int_value` Int32 `json:$.int_value` DEFAULT 1
```
If a column has a default value defined, a row won't be sent to quarantine when that field is missing from the JSON object or has a null value.
You can also alter the schema of an existing NDJSON Data Source to add default values: add them to the datafile and push it with `tb push --force`.
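For illustration, here's a sketch of an event that would previously have gone to quarantine but is now accepted thanks to the defaults above; the Data Source name and token are placeholders:

```ts
// Sketch: `timestamp` and `int_value` are missing, so the defaults
// (now() and 1) apply instead of sending the row to quarantine.
await fetch('https://api.tinybird.co/v0/events?name=my_datasource', {
  method: 'POST',
  headers: { Authorization: `Bearer ${process.env.TINYBIRD_TOKEN}` },
  body: JSON.stringify({ string_value: 'hello' }),
})
```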
Confirm before saving in Time Series
To avoid accidental changes in your Time Series, we've added a confirmation banner when saving changes in the configuration. This way, you can review the changes before applying them or duplicate the Time Series to create a new one.