
Migrate Log Viewer → DQL + OpenPipeline

Hands-on

Migrating Log Analysis to DQL

Log analysis moves from the classic Log Viewer to DQL queries on Grail. All existing log data remains available; what changes is how you query and process it.

🛠 Try it: Ctrl+K → "Logs" → open the Logs app. Every filter you apply generates DQL in the query bar at the top.

Basic Log Queries

// All error logs in the last hour
fetch logs, from:now() - 1h
| filter loglevel == "ERROR"
| sort timestamp desc
| limit 50

// Search for specific text
fetch logs, from:now() - 1h
| filter contains(content, "connection refused")
| fields timestamp, content, dt.process.name

Log Aggregation

// Error rate over time (renders as line chart in Notebooks)
fetch logs, from:now() - 2h
| summarize total = count(), errors = countIf(loglevel == "ERROR"),
    by:{time_bucket = bin(timestamp, 15m)}
| fieldsAdd error_rate = (errors * 100.0) / total
| sort time_bucket asc

🛠 Try it: Run the error rate query in a Notebook. Change the visualization to "Line chart" to see the error rate trending over time.

Parsing Structured Logs

When log content contains JSON, use parse to extract fields. The parsed fields use bracket notation (log[level]):

fetch logs, from:now() - 1h
| filter contains(content, "{")
| parse content, "JSON:log"
| fields timestamp, log[level], log[message], log[service]

โš ๏ธ The native log level field is loglevel (one word) โ€” NOT log.level. Use loglevel for filtering without parsing:

fetch logs | filter loglevel == "ERROR"

The log[level] bracket syntax above is only for JSON-parsed output (after parse content, "JSON:log").

Parse Patterns

Beyond JSON, DQL's parse command supports additional Dynatrace Pattern Language (DPL) matchers for extracting structured data from log content:

// Key-value parsing: extract key=value pairs
fetch logs, from:now() - 1h
| parse content, "KVP{KeyValuePair:kvp}"
| fields timestamp, kvp[status], kvp[duration]

// Custom patterns: combine DPL matchers (LD, INT) with literal text
fetch logs, from:now() - 1h
| parse content, "LD 'status=' INT:status LD 'duration=' INT:duration 'ms'"
| fields timestamp, status, duration

Log-Based Alerting with makeTimeseries

// Create a metric from logs: use this in an anomaly detector
fetch logs, from:now() - 1h
| summarize errors = countIf(loglevel == "ERROR"), total = count(),
    by:{time_bucket = bin(timestamp, 1m)}
| fieldsAdd error_pct = (toDouble(errors) / toDouble(total)) * 100

💡 You can create anomaly detectors on log-derived metrics. Use makeTimeseries in the detector query to alert on log error rates, specific patterns, or volume spikes.
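Since the tip names makeTimeseries, here is a minimal sketch of that form (the 24h window and 1m interval are illustrative choices):

// Error count per minute as a time series, ready for an anomaly detector
fetch logs, from:now() - 24h
| filter loglevel == "ERROR"
| makeTimeseries error_count = count(), interval: 1m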

OpenPipeline: Parse at Ingest vs Query Time

When to Parse                Where                     Why
───────────────────────────  ────────────────────────  ────────────────────────────────
High-volume, known format    OpenPipeline (ingest)     Parse once, query fast forever
Ad-hoc investigation         DQL parse (query time)    Flexible, no config needed
Metric extraction            OpenPipeline              Create metrics from log patterns
Routing to buckets           OpenPipeline              Different retention per log type

🛠 Try it: Ctrl+K → "OpenPipeline" → Logs → see the default pipeline. Add a processing rule to extract a field from your logs. Compare query performance before and after.
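As a sketch of what such a processing rule could look like, assuming the DQL-based processor and reusing the status/duration log format from the parse examples above (the matching condition and field names are illustrative):

// Matching condition: selects which log records the rule applies to
matchesPhrase(content, "status=")

// Processor definition: parse runs once at ingest, so the extracted
// fields are stored on the record and need no parse step at query time
parse content, "LD 'status=' INT:status LD 'duration=' INT:duration 'ms'"

Once the rule is active, queries can filter on status and duration directly.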