# Telemetry
Firetiger ingests and stores telemetry data from your software systems. This is the data that Agents use to understand what’s happening in your infrastructure.
We use the OpenTelemetry standard for data ingestion. OpenTelemetry defines three signal types:
- Logs — textual records of events, like HTTP access logs, application errors, or audit trails.
- Traces — records of request execution across services, showing the path a request takes and where time is spent.
- Metrics — numeric measurements over time, like request rates, error counts, or CPU usage.
You can send data from any OpenTelemetry-compatible source. See Sending OpenTelemetry Data for setup instructions, or browse Integrations for source-specific guides.
## How data is organized
When Firetiger receives telemetry, it organizes the data by service name and time.
The service name comes from the OpenTelemetry service.name resource attribute. This is the standard way to identify which component of your system produced a piece of telemetry. If no service.name is set, data goes into a default table.
Each distinct service name gets its own table. For example, if you have services named api-gateway, billing-worker, and web-frontend, your logs will be stored in three separate tables. This keeps queries fast — when an Agent investigates an issue with your billing system, it only needs to scan the billing-worker table, not your entire log volume.
Service names are normalized when they’re received: api-gateway becomes api_gateway, MyService becomes my_service, and so on.
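The normalization rule can be sketched as a small function. This is illustrative only, assuming hyphen-to-underscore replacement plus CamelCase splitting; `normalize_service_name` is not a Firetiger API:

```python
import re

def normalize_service_name(name: str) -> str:
    """Illustrative sketch of the normalization described above:
    hyphens become underscores, CamelCase becomes snake_case."""
    s = name.replace("-", "_")
    # Insert an underscore at each CamelCase word boundary.
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", s)
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)
    return s.lower()

print(normalize_service_name("api-gateway"))  # api_gateway
print(normalize_service_name("MyService"))    # my_service
```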
## Structured logs
The more structure your logs have, the more useful they are to Firetiger’s agents. Plain text logs like "user u_123 logged in" work, but structured logs with typed fields are much better — they let agents write precise queries instead of parsing strings.
A good structured log record uses OpenTelemetry attributes to capture discrete facts about an event. Here’s what that looks like in practice:
An HTTP request log:
| Attribute | Value |
|---|---|
| http.method | GET |
| http.route | /api/v1/users |
| http.status_code | 200 |
| http.duration_ms | 42 |
| user.id | u_123 |
| request.id | req_abc |
A deployment event:
| Attribute | Value |
|---|---|
| deploy.service | billing-worker |
| deploy.sha | a1b2c3d |
| deploy.environment | production |
| deploy.trigger | merge |
A background job completion:
| Attribute | Value |
|---|---|
| job.name | sync_invoices |
| job.duration_ms | 12340 |
| job.status | success |
| job.records_processed | 847 |
The key principle is: if you’d want to filter, group, or aggregate on a value, make it a separate attribute rather than embedding it in a message string.
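In application code, this principle means preferring an attribute map over string interpolation. A minimal illustration in plain Python, using the background-job example above:

```python
# Embedding facts in the message string makes them opaque to queries:
message = "job sync_invoices finished with status success after 12340ms"

# Capturing the same facts as typed attributes lets agents filter,
# group, and aggregate on each one directly:
body = "background job completed"
attributes = {
    "job.name": "sync_invoices",
    "job.duration_ms": 12340,
    "job.status": "success",
    "job.records_processed": 847,
}
```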
## Sending structured logs with OpenTelemetry SDKs
OpenTelemetry provides SDKs for most languages. Here are a few examples of emitting structured logs:
Python (opentelemetry-python):
```python
from opentelemetry._logs import SeverityNumber
from opentelemetry.sdk._logs import LoggerProvider, LogRecord
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter

provider = LoggerProvider()
provider.add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter(endpoint="https://ingest.example.com/v1/logs"))
)
logger = provider.get_logger("my-service")

logger.emit(LogRecord(
    severity_number=SeverityNumber.INFO,
    body="invoice sync completed",
    attributes={
        "job.name": "sync_invoices",
        "job.duration_ms": 12340,
        "job.status": "success",
        "job.records_processed": 847,
    },
))
```
Go (opentelemetry-go):
```go
import "go.opentelemetry.io/otel/log"

logger := loggerProvider.Logger("my-service")

record := log.Record{}
record.SetBody(log.StringValue("invoice sync completed"))
record.SetSeverity(log.SeverityInfo)
record.AddAttributes(
	log.String("job.name", "sync_invoices"),
	log.Int("job.duration_ms", 12340),
	log.String("job.status", "success"),
	log.Int("job.records_processed", 847),
)
logger.Emit(ctx, record)
```
Node.js (opentelemetry-js):
```javascript
import { logs, SeverityNumber } from "@opentelemetry/api-logs"

const logger = logs.getLogger("my-service")

logger.emit({
  severityNumber: SeverityNumber.INFO,
  body: "invoice sync completed",
  attributes: {
    "job.name": "sync_invoices",
    "job.duration_ms": 12340,
    "job.status": "success",
    "job.records_processed": 847,
  },
})
```
See Sending OpenTelemetry Data for full setup instructions including configuring exporters and the service.name resource attribute.
## Schema inference
When Firetiger receives log data, it automatically infers and evolves the schema of your tables based on the data it sees. You don’t need to define schemas up front.
If your application emits logs with structured attributes (like the examples above), Firetiger will detect the types and create typed columns for each one. job.duration_ms becomes an integer column, job.status becomes a string column, and so on. As new fields appear in your data, the schema expands to accommodate them.
This also works for JSON bodies. If your logs contain a JSON string as the body, Firetiger will detect and unpack it into typed columns automatically.
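One way to picture the unpacking step is a sketch that maps each field of a JSON body to a column type. This is an illustration of the idea, not Firetiger's actual implementation, and the type names are hypothetical:

```python
import json

# A log record whose body is a JSON string, as described above.
body = json.dumps({"job.name": "sync_invoices", "job.duration_ms": 12340, "job.status": "success"})

def infer_columns(json_body: str) -> dict:
    """Illustrative sketch: map each unpacked field to a column type."""
    type_names = {int: "INTEGER", float: "FLOAT", bool: "BOOLEAN", str: "STRING"}
    return {key: type_names.get(type(value), "JSON")
            for key, value in json.loads(json_body).items()}

print(infer_columns(body))
# {'job.name': 'STRING', 'job.duration_ms': 'INTEGER', 'job.status': 'STRING'}
```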
Attribute names are normalized to snake_case during ingestion — invocationId becomes invocation_id, InstanceID becomes instance_id, etc.
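The snake_case rule can be sketched with a pair of regular expressions. Again, this is illustrative rather than the actual ingestion code:

```python
import re

def to_snake_case(attr: str) -> str:
    """Insert underscores at CamelCase boundaries, then lowercase."""
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", attr)
    s = re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s)
    return s.lower()

print(to_snake_case("invocationId"))  # invocation_id
print(to_snake_case("InstanceID"))    # instance_id
```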
## Querying your data
The primary way to query telemetry in Firetiger is through Agents. When an agent investigates an issue, it writes and executes SQL queries against your telemetry tables on your behalf. You describe what you’re looking for in natural language, and the agent figures out the right tables, columns, and filters.
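To make this concrete, here is a toy illustration using SQLite. The table and column names follow the earlier billing-worker example and the data is invented; they do not reflect a real Firetiger schema, only the kind of aggregate query an agent might run:

```python
import sqlite3

# Toy stand-in for a per-service log table (schema is illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE billing_worker (
    timestamp TEXT, job_name TEXT, job_status TEXT, job_duration_ms INTEGER)""")
conn.executemany(
    "INSERT INTO billing_worker VALUES (?, ?, ?, ?)",
    [("2024-01-01T00:00:00Z", "sync_invoices", "success", 12340),
     ("2024-01-01T00:05:00Z", "sync_invoices", "failure", 310),
     ("2024-01-01T00:10:00Z", "sync_invoices", "success", 11895)],
)

# The kind of query an agent might write when asked
# "how often are billing jobs failing?"
rows = conn.execute("""
    SELECT job_status, COUNT(*) AS n, AVG(job_duration_ms) AS avg_ms
    FROM billing_worker
    GROUP BY job_status
    ORDER BY job_status
""").fetchall()
print(rows)  # [('failure', 1, 310.0), ('success', 2, 12117.5)]
```

Because the structured attributes became typed columns, the query filters and aggregates directly instead of parsing message strings.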
You can also query your data directly as an MCP client. This is useful for integrating Firetiger into coding agents, AI assistants, or any tool that speaks the MCP protocol. See Using Firetiger with MCP for setup instructions.
## Volume and pricing
Firetiger is built to accept very high volumes of telemetry data. We don’t charge based on cardinality — you won’t be penalized for having many unique label values, high-dimensional attributes, or a large number of distinct services. Send what you need, and query what matters.