
Logs: add Pipelines as Logpush destination#29469

Draft
Marcinthecloud wants to merge 1 commit into production from pipelines-logpush-destination

Conversation

@Marcinthecloud
Contributor

Adds documentation for using Cloudflare Pipelines as a native Logpush destination, enabling SQL transformation of logs before storage in R2.

  • Add Logpush Pipelines destination how-to guide
  • Add changelog entry for Pipelines destination support

@github-actions
Contributor

This pull request requires reviews from CODEOWNERS as it changes files that match the following patterns:

| Pattern | Owners |
| --- | --- |
| /src/content/changelog/ | @cloudflare/pm-changelogs, @cloudflare/pcx-technical-writing |
| /src/content/docs/logs/ | @soheiokamoto, @angelampcosta, @rianvdm, @cloudflare/pcx-technical-writing |

@Marcinthecloud
Contributor Author

/bonk review this, ensuring that:

  • the changelog is clear and inherits guidance from our changelog command
  • examples in the docs are valid and clear
  • examples are exhaustive and guide users towards real-world use-cases and best-practices

fix trivial examples, and make suggestions on the PR otherwise.

@ask-bonk
Contributor

ask-bonk bot left a comment


Well-structured documentation. Two minor style suggestions:

WHERE EdgeResponseStatus >= 400;
```

Pipelines SQL supports string functions, regex, hashing (SHA-256, MD5), JSON extraction, timestamp conversion, conditional expressions, and more. For the full list, refer to the [Pipelines SQL reference](/pipelines/sql-reference/).
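To make the capabilities listed above concrete, here is a hypothetical sketch of a transformation query of the kind the guide describes: it hashes a PII field, strips query strings, classifies status codes, and filters to errors before the rows reach R2. The `INSERT INTO <sink> SELECT ... FROM <stream>` shape, the sink/stream names (`r2_sink`, `http_requests`), and the exact function names (`SHA256`, `REGEXP_REPLACE`, `CAST`) are assumptions drawn from common SQL dialects; confirm them against the [Pipelines SQL reference](/pipelines/sql-reference/).

```sql
-- Hypothetical sketch: reshape HTTP request logs before storage in R2.
-- Sink/stream names and function names are assumptions; check the
-- Pipelines SQL reference for the supported forms.
INSERT INTO r2_sink
SELECT
  CAST(EdgeStartTimestamp AS TIMESTAMP) AS event_time,    -- timestamp conversion
  SHA256(ClientIP) AS client_ip_hash,                     -- hash PII before storage
  REGEXP_REPLACE(ClientRequestURI, '\?.*$', '') AS path,  -- regex: drop query strings
  CASE
    WHEN EdgeResponseStatus >= 500 THEN 'server_error'
    WHEN EdgeResponseStatus >= 400 THEN 'client_error'
    ELSE 'ok'
  END AS status_class                                     -- conditional expression
FROM http_requests
WHERE EdgeResponseStatus >= 400;                          -- keep only errors
```

The field names (`EdgeResponseStatus`, `ClientIP`, and so on) come from the HTTP requests Logpush dataset referenced elsewhere in this PR.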
Contributor


Per the style guide, use 'Refer to' instead of 'See':

Suggested change
Pipelines SQL supports string functions, regex, hashing (SHA-256, MD5), JSON extraction, timestamp conversion, conditional expressions, and more. For the full list, see the [Pipelines SQL reference](/pipelines/sql-reference/).
Pipelines SQL supports string functions, regex, hashing (SHA-256, MD5), JSON extraction, timestamp conversion, conditional expressions, and more. For the full list, refer to the [Pipelines SQL reference](/pipelines/sql-reference/).

- Choose the file format: Raw Parquet, JSON, or [R2 Data Catalog (Apache Iceberg)](/r2/data-catalog/).
- Optionally, you can edit advanced settings such as rolling policy, partitioning, and compression. For more information about these settings, refer to the [Pipelines Sinks documentation](/pipelines/sinks/).

7. Select **Generate token** to automatically create a token that will be used for authentication to R2, R2 Data Catalog, and the Pipelines Stream.
Contributor


Bold the UI element:

Suggested change
7. Select Generate token to automatically create a token that will be used for authentication to R2, R2 Data Catalog, and the Pipelines Stream.
7. Select **Generate token** to automatically create a token that will be used for authentication to R2, R2 Data Catalog, and the Pipelines Stream.

@ask-bonk
Contributor

ask-bonk bot commented Mar 30, 2026

Posted 2 inline suggestions on PR #29469:

  1. Changelog line 30: Change "See" to "Refer to" per style guide
  2. How-to guide line 41: Bold the Generate token UI element

The documentation is well-structured with proper frontmatter, correct component usage (Steps, DashButton), and good code examples. The sidebar order (41) appropriately places Pipelines after R2 (40).



3. Select **Pipelines** as the destination.

4. Select the dataset from the dropdown. The schema for the respective dataset will automatically be applied to the Pipeline that is created.
Contributor


Should we mention that the Pipelines creation wizard opens when the create button is selected?


For a full list of fields available in each dataset, refer to [Datasets](/logs/logpush/logpush-job/datasets/).

## Manage via the Cloudflare dashboard
Contributor


Would it help to have some screenshots here?


4 participants