Service Data Sources¶
Tinybird provides Service Data Sources that you can use to inspect what's going on in your Tinybird account, diagnose issues, monitor usage, and so on.
For example, you can get real-time stats about API calls or a log of every operation on your Data Sources. This is similar to using system tables in a database, although Service Data Sources contain information about the usage of the service itself.
Queries made to Service Data Sources are free of charge and don't count towards your usage. However, calls to API Endpoints that use Service Data Sources do count towards API rate limits. See Billing.
Considerations¶
- You can't use Service Data Sources in Materialized View queries.
- You can pass dynamic query parameters to API Endpoints that query Service Data Sources.
- You can only query Organization-level Service Data Sources if you're an administrator. See Consumption overview.
Service Data Sources¶
The following Service Data Sources are available.
tinybird.pipe_stats_rt¶
Contains information about all requests made to your API Endpoints in real time.
This Data Source has a TTL of 7 days. If you need to query data older than 7 days you must use the aggregated by day data available at tinybird.pipe_stats.
Field | Type | Description |
---|---|---|
start_datetime | DateTime | API call start date and time. |
pipe_id | String | Pipe Id as returned in our Pipes API (query_api in case it is a Query API request). |
pipe_name | String | Pipe name as returned in our Pipes API (query_api in case it is a Query API request). |
duration | Float | API call duration in seconds. |
read_bytes | UInt64 | API call read data in bytes. |
read_rows | UInt64 | API call rows read. |
result_rows | UInt64 | Rows returned by the API call. |
url | String | URL (token param is removed for security reasons). |
error | UInt8 | 1 if the query returned an error, else 0. |
request_id | String | API call identifier returned in x-request-id header. Format is ULID string. |
token | String | API call token identifier used. |
token_name | String | API call token name used. |
status_code | Int32 | API call returned status code. |
method | String | API call method POST or GET. |
parameters | Map(String, String) | API call parameters used. |
release | String | Semantic version of the release (deprecated). |
user_agent | Nullable(String) | User Agent HTTP header from the request. |
resource_tags | Array(String) | Tags associated with the Pipe when the request was made. |
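For example, a query along these lines (a sketch; the one-hour window and LIMIT are arbitrary) surfaces the slowest API calls using the columns above:

```sql
-- Slowest API Endpoint calls in the last hour
SELECT
    pipe_name,
    status_code,
    duration,
    read_bytes,
    url
FROM tinybird.pipe_stats_rt
WHERE start_datetime > now() - INTERVAL 1 HOUR
ORDER BY duration DESC
LIMIT 10
```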
tinybird.pipe_stats¶
Aggregates the request stats in tinybird.pipe_stats_rt by day.
Field | Type | Description |
---|---|---|
date | Date | Request date. |
pipe_id | String | Pipe Id as returned in our Pipes API. |
pipe_name | String | Name of the Pipe. |
view_count | UInt64 | Request count. |
error_count | UInt64 | Number of requests with error. |
avg_duration_state | AggregateFunction(avg, Float32) | Average duration state in seconds (see Querying _state columns). |
quantile_timing_state | AggregateFunction(quantilesTiming(0.9, 0.95, 0.99), Float64) | 0.9, 0.95 and 0.99 quantiles state. Time in milliseconds (see Querying _state columns). |
read_bytes_sum | UInt64 | Total bytes read. |
read_rows_sum | UInt64 | Total rows read. |
resource_tags | Array(String) | All the tags associated with the resource when the aggregated requests were made. |
tinybird.bi_stats_rt¶
Contains information about all requests to your BI Connector interface in real time.
This Data Source has a TTL of 7 days. If you need to query data older than 7 days you must use the aggregated by day data available at tinybird.bi_stats.
Field | Type | Description |
---|---|---|
start_datetime | DateTime | Query start timestamp. |
query | String | Executed query. |
query_normalized | String | Normalized executed query. This is the pattern of the query, without literals. Useful to analyze usage patterns. |
error_code | Int32 | Error code, if any. 0 on normal execution. |
error | String | Error description, if any. Empty otherwise. |
duration | UInt64 | Query duration in milliseconds. |
read_rows | UInt64 | Read rows. |
read_bytes | UInt64 | Read bytes. |
result_rows | UInt64 | Total rows returned. |
result_bytes | UInt64 | Total bytes returned. |
tinybird.bi_stats¶
Aggregates the stats in tinybird.bi_stats_rt by day.
Field | Type | Description |
---|---|---|
date | Date | Stats date. |
database | String | Database identifier. |
query_normalized | String | Normalized executed query. This is the pattern of the query, without literals. Useful to analyze usage patterns. |
view_count | UInt64 | Requests count. |
error_count | UInt64 | Error count. |
avg_duration_state | AggregateFunction(avg, Float32) | Average duration state in milliseconds (see Querying _state columns). |
quantile_timing_state | AggregateFunction(quantilesTiming(0.9, 0.95, 0.99), Float64) | 0.9, 0.95 and 0.99 quantiles state. Time in milliseconds (see Querying _state columns). |
read_bytes_sum | UInt64 | Total bytes read. |
read_rows_sum | UInt64 | Total rows read. |
avg_result_rows_state | AggregateFunction(avg, Float32) | Average rows returned state (see Querying _state columns). |
avg_result_bytes_state | AggregateFunction(avg, Float32) | Average bytes returned state (see Querying _state columns). |
tinybird.block_log¶
The Data Source contains details about how Tinybird ingests data into your Data Sources. You can use this Service Data Source to spot problematic parts of your data.
Field | Type | Description |
---|---|---|
timestamp | DateTime | Date and time of the block ingestion. |
import_id | String | Id of the import operation. |
job_id | Nullable(String) | Id of the job that ingested the block of data, if it was ingested by URL. In this case, import_id and job_id have the same value. |
request_id | String | Id of the request that performed the operation. If data has been ingested, import_id and request_id have the same value. Format is ULID string. |
source | String | Either the URL or stream or body keywords. |
block_id | String | Block identifier. You can cross this with the blocks_ids column from the tinybird.datasources_ops_log Service Data Source. |
status | String | done or error. |
datasource_id | String | Data Source consistent id. |
datasource_name | String | Data Source name when the block was ingested. |
start_offset | Nullable(Int64) | If the ingestion was split, the starting byte of this block. |
end_offset | Nullable(Int64) | If split, the ending byte of the block. |
rows | Nullable(Int32) | How many rows it ingested. |
parser | Nullable(String) | Whether the native block parser was used or ingestion fell back to row-by-row parsing. |
quarantine_lines | Nullable(UInt32) | If any, how many rows went into the quarantine Data Source. |
empty_lines | Nullable(UInt32) | If any, how many empty lines were skipped. |
bytes | Nullable(UInt32) | How many bytes the block had. |
processing_time | Nullable(Float32) | How long it took in seconds. |
processing_error | Nullable(String) | Detailed message in case of error. |
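As a sketch of how to spot problematic ingestion, the following query lists recent blocks that sent rows to quarantine (the one-day window is arbitrary):

```sql
-- Blocks with quarantined rows in the last day
SELECT
    timestamp,
    datasource_name,
    block_id,
    quarantine_lines,
    processing_error
FROM tinybird.block_log
WHERE timestamp > now() - INTERVAL 1 DAY
  AND quarantine_lines > 0
ORDER BY timestamp DESC
```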
When Tinybird ingests data from a URL, it splits the download into several requests, resulting in different ingestion blocks. The same happens when data is uploaded with a multipart request.
tinybird.datasources_ops_log¶
Contains all operations performed on your Data Sources. Tinybird tracks the following operations:
Event | Description |
---|---|
create | A Data Source is created. |
sync-dynamodb | Initial synchronization from a DynamoDB table when using the DynamoDB Connector. |
append | Append operation. |
append-hfi | Append operation using the High-frequency Ingestion API. |
append-kafka | Append operation using the Kafka Connector. |
append-dynamodb | Append operation using the DynamoDB Connector. |
replace | A replace operation took place in the Data Source. |
delete | A delete operation took place in the Data Source. |
truncate | A truncate operation took place in the Data Source. |
rename | The Data Source was renamed. |
populateview-queued | A populate operation was queued for execution. |
populateview | A finished populate operation (up to 8 hours after it started). |
copy | A copy operation took place in the Data Source. |
alter | An alter operation took place in the Data Source. |
Materializations are logged with the same event_type and operation_id as the operation that triggers them. You can track the materialization Pipe with pipe_id and pipe_name.
Tinybird logs all operations with the following information in this Data Source:
Field | Type | Description |
---|---|---|
timestamp | DateTime | Date and time when the operation started. |
event_type | String | Operation being logged. |
operation_id | String | Groups rows affected by the same operation. Useful for checking materializations triggered by an append operation. |
datasource_id | String | Id of your Data Source. The Data Source id is consistent after renaming operations. You should use the id when you want to track name changes. |
datasource_name | String | Name of your Data Source when the operation happened. |
result | String | ok or error. |
elapsed_time | Float32 | How much time the operation took in seconds. |
error | Nullable(String) | Detailed error message if the result was error. |
import_id | Nullable(String) | Id of the import operation, if data has been ingested using one of the following operations: create, append, or replace. |
job_id | Nullable(String) | Id of the job that performed the operation, if any. If data has been ingested, import_id and job_id must have the same value. |
request_id | String | Id of the request that performed the operation. If data has been ingested, import_id and request_id must have the same value. Format is ULID string. |
rows | Nullable(UInt64) | How many rows the operations affected. This depends on event_type : for the append event, how many rows got inserted; for delete or truncate events, how many rows the Data Source had; for replace , how many rows the Data Source has after the operation. |
rows_quarantine | Nullable(UInt64) | How many rows went into the quarantine Data Source, if any. |
blocks_ids | Array(String) | List of blocks ids used for the operation. See the tinybird.block_log Service Data Source for more details. |
options | Nested(Names String, Values String) | Tinybird stores key-value pairs with extra information for some operations. For the replace event, Tinybird uses the rows_before_replace key to track how many rows the Data Source had before the replacement happened, the replace_condition key shows what condition was used. For append and replace events, Tinybird stores the data source , for example the URL, or body/stream keywords. For rename event, old_name and new_name . For populateview you can find there the whole populate job metadata as a JSON string. For alter events, Tinybird stores operations , and dependent pipes as dependencies if they exist. |
read_bytes | UInt64 | Read bytes in the operation. |
read_rows | UInt64 | Read rows in the operation. |
written_rows | UInt64 | Written rows in the operation. |
written_bytes | UInt64 | Written bytes in the operation. |
written_rows_quarantine | UInt64 | Quarantined rows in the operation. |
written_bytes_quarantine | UInt64 | Quarantined bytes in the operation. |
pipe_id | String | If present, materialization Pipe id as returned in our Pipes API. |
pipe_name | String | If present, materialization Pipe name as returned in our Pipes API. |
resource_tags | Array(String) | Tags associated with the Pipe when the request was made. |
release | String | Semantic version of the release (deprecated). |
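For instance, a query like the following (a sketch; the seven-day window is arbitrary) counts failed operations per Data Source and operation type:

```sql
-- Failed Data Source operations in the last week
SELECT
    datasource_name,
    event_type,
    count() AS errors
FROM tinybird.datasources_ops_log
WHERE timestamp > now() - INTERVAL 7 DAY
  AND result = 'error'
GROUP BY datasource_name, event_type
ORDER BY errors DESC
```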
tinybird.datasource_ops_stats¶
Data from datasource_ops_log, aggregated by day.
Field | Type | Description |
---|---|---|
event_date | Date | Date of the event. |
workspace_id | String | Unique identifier for the Workspace. |
event_type | String | Operation being logged. |
pipe_id | String | Identifier of the Pipe. |
pipe_name | String | Name of the Pipe. |
error_count | UInt64 | Number of requests with an error. |
executions | UInt64 | Number of executions. |
avg_elapsed_time_state | AggregateFunction(avg, Float32) | Average elapsed time state in seconds (see Querying _state columns). |
quantiles_state | AggregateFunction(quantilesTiming(0.9, 0.95, 0.99), Float64) | 0.9, 0.95 and 0.99 quantiles state. Time in milliseconds (see Querying _state columns). |
read_bytes | UInt64 | Read bytes in the operation. |
read_rows | UInt64 | Read rows in the operation. |
written_rows | UInt64 | Written rows in the operation. |
written_bytes | UInt64 | Written bytes in the operation. |
written_rows_quarantine | UInt64 | Quarantined rows in the operation. |
written_bytes_quarantine | UInt64 | Quarantined bytes in the operation. |
resource_tags | Array(String) | Tags associated with the Pipe when the request was made. |
tinybird.endpoint_errors¶
Provides the errors of your published endpoints over the last 30 days. Tinybird logs all errors with additional information in this Data Source.
Field | Type | Description |
---|---|---|
start_datetime | DateTime | Date and time when the API call started. |
request_id | String | The id of the request that performed the operation. Format is ULID string. |
pipe_id | String | If present, Pipe id as returned in our Pipes API. |
pipe_name | String | If present, Pipe name as returned in our Pipes API. |
params | Nullable(String) | URL query params included in the request. |
url | Nullable(String) | URL pathname. |
status_code | Nullable(Int32) | HTTP error code. |
error | Nullable(String) | Error message. |
resource_tags | Array(String) | Tags associated with the Pipe when the request was made. |
tinybird.kafka_ops_log¶
Contains all operations performed on your Kafka Data Sources during the last 30 days.
Field | Type | Description |
---|---|---|
timestamp | DateTime | Date and time when the operation took place. |
datasource_id | String | Id of your Data Source. The Data Source id is consistent after renaming operations. You should use the id when you want to track name changes. |
topic | String | Kafka topic. |
partition | Int16 | Partition number, or -1 for all partitions. |
msg_type | String | 'info' for regular messages, 'warning' for issues related to the user's Kafka cluster, deserialization or Materialized Views, and 'error' for other issues. |
lag | Int64 | Number of messages behind for the partition. This is the difference between the high-water mark and the last commit offset. |
processed_messages | Int32 | Messages processed for a topic and partition. |
processed_bytes | Int32 | Amount of bytes processed. |
committed_messages | Int32 | Messages ingested for a topic and partition. |
msg | String | Information in the case of warnings or errors. Empty otherwise. |
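As an illustrative sketch, the latest reported lag per topic and partition can be read with argMax over the recent log:

```sql
-- Latest reported consumer lag per topic and partition
SELECT
    topic,
    partition,
    argMax(lag, timestamp) AS current_lag
FROM tinybird.kafka_ops_log
WHERE timestamp > now() - INTERVAL 1 HOUR
GROUP BY topic, partition
ORDER BY current_lag DESC
```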
tinybird.datasources_storage¶
Contains stats about your Data Sources storage.
Tinybird logs maximum values per hour, the same as when it calculates storage consumption.
Field | Type | Description |
---|---|---|
datasource_id | String | Id of your Data Source. The Data Source id is consistent after renaming operations. You should use the id when you want to track name changes. |
datasource_name | String | Name of your Data Source. |
timestamp | DateTime | When storage was tracked. By hour. |
bytes | UInt64 | Max number of bytes the Data Source has, not including quarantine. |
rows | UInt64 | Max number of rows the Data Source has, not including quarantine. |
bytes_quarantine | UInt64 | Max number of bytes the Data Source has in quarantine. |
rows_quarantine | UInt64 | Max number of rows the Data Source has in quarantine. |
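Because Tinybird logs hourly maximums, a daily storage figure can be sketched by taking the maximum over the day:

```sql
-- Today's storage per Data Source (hourly maximums, so max() per day)
SELECT
    datasource_name,
    max(bytes) AS max_bytes,
    max(rows) AS max_rows
FROM tinybird.datasources_storage
WHERE toDate(timestamp) = today()
GROUP BY datasource_name
ORDER BY max_bytes DESC
```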
tinybird.releases_log (deprecated)¶
Contains operations performed on your Releases. Tinybird tracks the following operations:
Event | Description |
---|---|
init | First Release is created on Git sync. |
override | Release commit is overridden, with tb init --override-commit {{commit}}. |
deploy | Resources from a commit are deployed to a Release. |
preview | Release status is changed to preview. |
promote | Release status is changed to live. |
post | Resources from a commit are deployed to the live Release. |
rollback | A rollback was performed; a previous Release is now live. |
delete | Release is deleted. |
Tinybird logs all operations with additional information in this Data Source.
Field | Type | Description |
---|---|---|
timestamp | DateTime64 | Date and time when the operation took place. |
event_type | String | Operation being logged. |
semver | String | Semantic version that identifies the Release. |
commit | String | Git sha commit related to the operation. |
token | String | API call token identifier used. |
token_name | String | API call token name used. |
result | String | ok or error. |
error | String | Detailed error message, if any. |
tinybird.sinks_ops_log¶
Contains all operations performed on your Sink Pipes.
Field | Type | Description |
---|---|---|
timestamp | DateTime64 | Date and time when the operation took place. |
service | LowCardinality(String) | Type of Sink (GCS, S3, and so on). |
pipe_id | String | The ID of the Sink Pipe. |
pipe_name | String | The name of the Sink Pipe. |
token_name | String | Token name used. |
result | LowCardinality(String) | ok or error. |
error | Nullable(String) | Detailed error message, if any. |
elapsed_time | Float64 | The duration of the operation in seconds. |
job_id | Nullable(String) | ID of the job that performed the operation, if any. |
read_rows | UInt64 | Read rows in the Sink operation. |
written_rows | UInt64 | Written rows in the Sink operation. |
read_bytes | UInt64 | Read bytes in the operation. |
written_bytes | UInt64 | Written bytes in the operation. |
output | Array(String) | The outputs of the operation. In the case of writing to a bucket, the name of the written files. |
parameters | Map(String, String) | The parameters used. Useful to debug the parameter query values. |
options | Map(String, String) | Extra information. You can access the values with options['key'] where key is one of: file_template, file_format, file_compression, bucket_path, execution_type. |
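For example, a sketch of listing recent failed Sink executions, reading the bucket path from the options map:

```sql
-- Recent failed Sink executions and where they were writing
SELECT
    timestamp,
    pipe_name,
    service,
    error,
    options['bucket_path'] AS bucket_path
FROM tinybird.sinks_ops_log
WHERE result = 'error'
ORDER BY timestamp DESC
LIMIT 20
```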
tinybird.data_transfer¶
Stats of data transferred per hour by a Workspace.
Field | Type | Description |
---|---|---|
timestamp | DateTime | Date and time data transfer is tracked. By hour. |
event | LowCardinality(String) | Type of operation that generated the data (for example, sink). |
origin_provider | LowCardinality(String) | Provider data was transferred from. |
origin_region | LowCardinality(String) | Region data was transferred from. |
destination_provider | LowCardinality(String) | Provider data was transferred to. |
destination_region | LowCardinality(String) | Region data was transferred to. |
kind | LowCardinality(String) | intra or inter, depending on whether the data moves within or outside the region. |
tinybird.jobs_log¶
Contains all job executions performed in your Workspace. Tinybird logs all jobs with extra information in this Data Source:
Field | Type | Description |
---|---|---|
job_id | String | Unique identifier for the job. |
job_type | LowCardinality(String) | Type of job execution. delete_data , import , populateview , query , copy , copy_from_main , copy_from_branch , data_branch , deploy_branch , regression_tests , sink , sink_from_branch . |
workspace_id | String | Unique identifier for the Workspace. |
pipe_id | String | Unique identifier for the Pipe. |
pipe_name | String | Name of the Pipe. |
created_at | DateTime | Timestamp when the job was created. |
updated_at | DateTime | Timestamp when the job was last updated. |
started_at | DateTime | Timestamp when the job execution started. |
status | LowCardinality(String) | Current status of the job. waiting , working , done , error , cancelled . |
error | Nullable(String) | Detailed error message if the result was error. |
job_metadata | String | Additional metadata related to the job execution, as a JSON string. |
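For instance, a sketch of listing jobs that are currently queued or running:

```sql
-- Jobs that are waiting in queue or still working
SELECT
    job_id,
    job_type,
    pipe_name,
    status,
    created_at,
    started_at
FROM tinybird.jobs_log
WHERE status IN ('waiting', 'working')
ORDER BY created_at DESC
```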
Learn more about how to track background jobs execution in the Jobs monitoring guide.
Use resource_tags to better track usage¶
You can use tags that you've added to your resources, like Pipes or Data Sources, to analyze usage and cost attribution across your organization.
For example, you can add tags for projects, environments, or versions and compare usage in later queries to Service Data Sources such as tinybird.datasources_ops_stats, which aggregates operations data by day.
The following Service Data Sources support resource_tags:
- pipe_stats_rt
- pipe_stats
- endpoint_errors
- organization.pipe_stats_rt
- organization.pipe_stats
- datasources_ops_log
- datasources_ops_stats
- organization.datasources_ops_log
- organization.datasources_ops_stats
To add tags to resources, see Organizing resources in Workspaces.
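For example, assuming a hypothetical tag named project-a has been added to some Pipes, daily usage for that tag could be sketched as:

```sql
-- Daily request count for Pipes tagged 'project-a' (tag name is hypothetical)
SELECT
    date,
    sum(view_count) AS requests
FROM tinybird.pipe_stats
WHERE has(resource_tags, 'project-a')
GROUP BY date
ORDER BY date
```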
Querying _state columns¶
Several of the Service Data Sources include columns suffixed with _state. This suffix identifies columns with values that are in an intermediate aggregated state. When reading these columns, merge the intermediate states to get the final value.
To merge intermediate states, wrap the column in the original aggregation function and apply the -Merge combinator.
For example, to finalize the value of the avg_duration_state column, use the avgMerge function:

```sql
-- Finalize the value for the avg_duration_state column
SELECT
    date,
    avgMerge(avg_duration_state) AS avg_time,
    quantilesTimingMerge(0.9, 0.95, 0.99)(quantile_timing_state) AS quantiles_timing_in_ms_array
FROM tinybird.pipe_stats
WHERE pipe_id = 'PIPE_ID'
GROUP BY date
```

See Combinators to learn more about the -Merge combinator.
Organization Service Data Sources¶
The following is a complete list of available Organization Service Data Sources:
Data Source | Description |
---|---|
organization.workspaces | Lists all Organization Workspaces and related information, including name, IDs, databases, plan, when it was created, and whether it has been soft-deleted. |
organization.processed_data | Information related to all processed data per day per workspace. |
organization.datasources_storage | Equivalent to tinybird.datasources_storage but with data for all Organization Workspaces. |
organization.pipe_stats | Equivalent to tinybird.pipe_stats but with data for all Organization Workspaces. |
organization.pipe_stats_rt | Equivalent to tinybird.pipe_stats_rt but with data for all Organization Workspaces. |
organization.datasources_ops_log | Equivalent to tinybird.datasources_ops_log but with data for all Organization Workspaces. |
organization.data_transfer | Equivalent to tinybird.data_transfer but with data for all Organization Workspaces. |
organization.jobs_log | Equivalent to tinybird.jobs_log but with data for all Organization Workspaces. |
organization.sinks_ops_log | Equivalent to tinybird.sinks_ops_log but with data for all Organization Workspaces. |
organization.bi_stats | Equivalent to tinybird.bi_stats but with data for all Organization Workspaces. |
organization.bi_stats_rt | Equivalent to tinybird.bi_stats_rt but with data for all Organization Workspaces. |
To query Organization Service Data Sources, go to any Workspace that belongs to the Organization and query them like regular Service Data Sources from the Playground or within Pipes, using the admin Token of an Organization Admin. You can also copy your admin Token and run queries using your preferred method, such as tb sql.
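For example, an Organization administrator could sketch a per-Pipe request count across all Workspaces like this (the day window is arbitrary):

```sql
-- Requests and errors per Pipe across the Organization, today
SELECT
    pipe_name,
    count() AS requests,
    countIf(error = 1) AS errors
FROM organization.pipe_stats_rt
WHERE start_datetime >= toStartOfDay(now())
GROUP BY pipe_name
ORDER BY requests DESC
```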
metrics_logs Service Data Source¶
The metrics_logs Service Data Source is available in all the Organization's Workspaces. As with the rest of the Organization Service Data Sources, it's only available to Organization administrators. New records for each monitored metric are added every minute with the following schema:
Field | Type | Description |
---|---|---|
timestamp | DateTime | Timestamp of the metric |
cluster | LowCardinality(String) | Name of the cluster |
host | LowCardinality(String) | Name of the host |
metric | LowCardinality(String) | Name of the metric |
value | String | Value of the metric |
description | LowCardinality(String) | Description of the metric |
organization_id | String | ID of your organization |
The available metrics are the following:
Metric | Description |
---|---|
MemoryTracking | Total amount of memory, in bytes, allocated by the server. |
OSMemoryTotal | The total amount of memory on the host system, in bytes. |
InstanceType | Instance type of the host. |
Query | Number of executing queries. |
NumberCPU | Number of CPUs. |
LoadAverage1 | The whole system load, averaged with exponential smoothing over 1 minute. The load represents the number of threads across all the processes (the scheduling entities of the OS kernel), that are currently running by CPU or waiting for IO, or ready to run but not being scheduled at this point of time. This number includes all the processes, not only the server. The number can be greater than the number of CPU cores, if the system is overloaded, and many processes are ready to run but waiting for CPU or IO. |
LoadAverage15 | Same as LoadAverage1, but averaged with exponential smoothing over 15 minutes. |
CPUUsage | The ratio of time the CPU core was running OS kernel (system) code or userspace code. This is a system-wide metric that includes all the processes on the host machine, not just the server. It also includes the time when the CPU was under-utilized for reasons internal to the CPU (memory loads, pipeline stalls, branch mispredictions, running another SMT core). |
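As an illustrative sketch, the latest MemoryTracking value per host can be read with argMax. Because value is stored as a String, it needs casting; the qualified name tinybird.metrics_logs is an assumption here, so adjust it to how the Data Source appears in your Workspace:

```sql
-- Latest reported server memory allocation per host
-- (tinybird.metrics_logs qualified name is an assumption)
SELECT
    host,
    argMax(toFloat64OrZero(value), timestamp) AS memory_bytes
FROM tinybird.metrics_logs
WHERE metric = 'MemoryTracking'
GROUP BY host
```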