+| Table | Description |
+| --- | --- |
+| `ps_data__` | Contains the data for each "table", in JSON format. Since JSON is used, this table's schema does not change when columns are added, removed or changed in the Sync Streams (or legacy Sync Rules) and client-side schema. |
+| `ps_data_local__` | Same as the above, but for [local-only](/client-sdks/advanced/local-only-usage) tables. |
+| ` ` (`VIEW`) | Views on the above `ps_data` tables, with each defined column in the client-side schema extracted from the JSON. For example, a `description` text column would be `CAST(data ->> '$.description' as TEXT)`. |
+| `ps_untyped` | Any synced table that is not defined in the client-side schema is placed here. If the table is added to the schema at a later point, the data is then migrated to `ps_data__`. |
+| `ps_oplog` | The operation history data as received from the [PowerSync Service](/architecture/powersync-service), grouped per bucket. |
+| `ps_crud` | The client-side upload queue (see [Writing Data](#writing-data-via-sqlite-database-and-upload-queue) below). |
+| `ps_buckets` | A small amount of metadata for each bucket. |
+| `ps_migrations` | Keeps track of client SDK schema migrations. |
+
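+As an illustration of the view layer described above, here is a simplified sketch of what a generated view might look like (the `todos` table and its columns are hypothetical, and the views actually generated by the SDK differ in detail):
+
+```sql
+-- Hypothetical sketch: a view over a ps_data__ table, extracting typed
+-- columns from the JSON `data` column as described above.
+CREATE VIEW todos AS
+SELECT
+  id,
+  CAST(data ->> '$.description' AS TEXT) AS description,
+  CAST(data ->> '$.completed' AS INTEGER) AS completed
+FROM ps_data__todos;
+```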
Most rows will be present in at least two tables — the `ps_data__` table, and in `ps_oplog`.
The copy of the row in `ps_oplog` may be newer than the one in `ps_data__`. This is because of the checkpoint system in PowerSync that gives the system its consistency properties. When a full [checkpoint](/architecture/consistency) has been downloaded, data is copied over from `ps_oplog` to the individual `ps_data__` tables.
-It is possible for different [buckets](/architecture/powersync-service#bucket-system) in Sync Rules to include overlapping data (for example, if multiple buckets query data from the same table). If rows with the same table and ID have been synced via multiple buckets, it may be present multiple times in `ps_oplog`, but only one will be preserved in the `ps_data__` table (the one with the highest `op_id`).
+It is possible for different [buckets](/architecture/powersync-service#bucket-system) to include overlapping data (for example, if multiple buckets contain data from the same table). If a row with the same table and ID has been synced via multiple buckets, it may be present multiple times in `ps_oplog`, but only one copy will be preserved in the `ps_data__` table (the one with the highest `op_id`).
@@ -68,7 +136,7 @@ The Client SDK processes the upload queue by invoking an `uploadData()` function
The reason why we designed PowerSync this way is that it allows you to apply your own backend business logic, validations and authorization to any mutations going to your source database.
-The PowerSync Client SDK automatically takes care of network failures and retries. If processing the upload queue fails (e.g. because the user is offline), it is automatically retried.
+The PowerSync Client SDK automatically takes care of network failures and retries. If processing mutations in the upload queue fails (e.g. because the user is offline), it is automatically retried.
diff --git a/architecture/powersync-protocol.mdx b/architecture/powersync-protocol.mdx
index 8408fe73..d25a2af8 100644
--- a/architecture/powersync-protocol.mdx
+++ b/architecture/powersync-protocol.mdx
@@ -1,5 +1,6 @@
---
title: "PowerSync Protocol"
+description: Overview of the sync protocol used between PowerSync clients and the PowerSync Service for efficient delta syncing.
---
This contains a broad overview of the sync protocol used between PowerSync clients and the [PowerSync Service](/architecture/powersync-service).
@@ -25,7 +26,7 @@ All synced data is grouped into [buckets](/architecture/powersync-service#bucket
Each bucket keeps an ordered list of changes to rows within the bucket (operation history) — generally as `PUT` or `REMOVE` operations.
* `PUT` is the equivalent of `INSERT OR REPLACE`
-* `REMOVE` is slightly different from `DELETE`: a row is only deleted from the client if it has been removed from _all_ buckets synced to the client.
+* `REMOVE` is slightly different from `DELETE`: a row is only deleted from the client if it has been removed from _all_ buckets synced to the client.
As a practical example of how buckets manifest themselves, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be obtained from the JWT). Now let's say users with IDs `A` and `B` exist in the source database. PowerSync will then replicate data from the source database and create individual buckets with bucket IDs `user_todo_lists["A"]` and `user_todo_lists["B"]`.
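The `REMOVE` semantics above can be sketched as follows. This is a simplified model, not the SDK's actual implementation, and the row and bucket IDs are hypothetical:

```javascript
// Simplified model of REMOVE semantics: a client only deletes a row locally
// once it has been removed from *all* buckets that synced it.
const bucketsContaining = new Map([
  // hypothetical row synced via two buckets
  ['todo-1', new Set(['user_todo_lists["A"]', 'shared_todo_lists["team-1"]'])],
]);

function applyRemove(rowId, bucketId) {
  const buckets = bucketsContaining.get(rowId);
  buckets.delete(bucketId);
  return buckets.size === 0; // true => perform the local DELETE
}

// Removing from one bucket keeps the row; removing from the last bucket deletes it.
applyRemove('todo-1', 'user_todo_lists["A"]');        // false: still in another bucket
applyRemove('todo-1', 'shared_todo_lists["team-1"]'); // true: delete locally
```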
diff --git a/architecture/powersync-service.mdx b/architecture/powersync-service.mdx
index 5a89e146..16ebd621 100644
--- a/architecture/powersync-service.mdx
+++ b/architecture/powersync-service.mdx
@@ -1,28 +1,44 @@
---
title: "PowerSync Service"
+description: Understand the PowerSync Service architecture, including the bucket system, data replication, and real-time streaming sync.
---
-When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), which is the server-side component of the sync engine responsible for the _read path_ from the source database to client-side SQLite databases: The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server), and (2) streaming data to clients. Both of these happen based on your _Sync Rules_ or _Sync Streams_ configuration.
+When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), which is the server-side component of the sync engine responsible for the _read path_ from the source database to client-side SQLite databases: The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server), and (2) streaming data to clients. Both of these happen based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
## Bucket System
The concept of _buckets_ is core to PowerSync and its scalability.
-_Buckets_ are basically partitions of data that allows the PowerSync Service to efficiently query the correct data that a specific client needs to sync.
+_Buckets_ are basically partitions of data that allow the PowerSync Service to efficiently query the correct data that a specific client needs to sync.
-When you define [Sync Rules](/sync/rules/overview), you define the different buckets that exist, and you define which [parameters](/sync/rules/parameter-queries) are used for each bucket.
+
+
+ With [Sync Streams](/sync/streams/overview), buckets are created **implicitly** based on your stream definitions, their queries, and subqueries. You don't need to understand or manage buckets directly — the PowerSync Service handles this automatically.
-**Sync Streams: Implicit Buckets**: In our new [Sync Streams](/sync/streams) system which is in [early alpha](/sync/overview), buckets and parameters are not explicitly defined, and are instead implicit based on the streams, their queries and subqueries.
+ For example, if you define a stream like:
+ ```yaml
+ streams:
+ user_lists:
+ auto_subscribe: true
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+ ```
+
+ PowerSync automatically creates the appropriate buckets internally based on the query parameters.
+
+
+ With legacy [Sync Rules](/sync/rules/overview), you explicitly define the buckets using `bucket_definitions` and specify which [parameters](/sync/rules/overview#parameters) are used for each bucket.
+
+
-For example, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be embedded in the JWT) to scope those to-do lists.
+### How Buckets Work
-Now let's say users with IDs `1`, `2` and `3` exist in the source database. PowerSync will then replicate data from the source database and create individual buckets with bucket IDs of `user_todo_lists["1"]`, `user_todo_lists["2"]` and `user_todo_lists["3"]`.
+To understand how buckets enable efficient syncing, consider this example: Let's say you have data scoped to users — the to-do lists for each user. Based on the data that exists in your source database, PowerSync will create individual buckets for each user. If users with IDs `1`, `2`, and `3` exist in your source database, PowerSync will create buckets with IDs `user_todo_lists["1"]`, `user_todo_lists["2"]`, and `user_todo_lists["3"]`.
-If a user with `user_id=1` in its JWT connects to the PowerSync Service and syncs data, PowerSync can very efficiently look up the appropriate bucket to sync, i.e. `user_todo_lists["1"]`.
+When a user with `user_id=1` in their JWT connects to the PowerSync Service, PowerSync can very efficiently look up the appropriate bucket to sync, i.e. `user_todo_lists["1"]`.
-As you can see above, a bucket's definition name and set of parameter values together form its _bucket ID_, for example `user_todo_lists["1"]`. If a bucket makes use of multiple parameters, they are comma-separated in the bucket ID, for example `user_todos["user1","admin"]`
+With legacy Sync Rules, a bucket ID is formed from the bucket definition name and its parameter values, for example `user_todo_lists["1"]`. With Sync Streams, the bucket IDs are generated automatically based on your stream queries — you don't need to define and name buckets explicitly.
@@ -39,21 +55,21 @@ This also means that the PowerSync Service has to keep track of less state per-u
Each bucket stores the _recent history_ of operations on each row, not just the latest state of the row.
-This is another core part of the PowerSync architecure — the PowerSync Service can efficiently query the _operations_ that each client needs to receive in order to be up to date. Tracking of operation history is also key to the data integrity and [consistency](/architecture/consistency) properties of PowerSync.
+This is another core part of the PowerSync architecture — the PowerSync Service can efficiently query the _operations_ that each client needs to receive in order to be up to date. Tracking of operation history is also key to the data integrity and [consistency](/architecture/consistency) properties of PowerSync.
-When a change occurs in the source database that affects a certain bucket (based on the Sync Rules or Sync Streams configuration), that change will be appended to the operation history in that bucket. Buckets are therefore treated as "append-only" data structures. That being said, to avoid an ever-growing operation history, the buckets can be [compacted](/maintenance-ops/compacting-buckets) (this is automatically done on PowerSync Cloud).
+When a change occurs in the source database that affects a certain bucket (based on your Sync Streams, or legacy Sync Rules), that change will be appended to the operation history in that bucket. Buckets are therefore treated as "append-only" data structures. That being said, to avoid an ever-growing operation history, the buckets can be [compacted](/maintenance-ops/compacting-buckets) (this is automatically done on PowerSync Cloud).
## Bucket Storage
-The PowerSync Service persists the bucket state in durable storage: there is a pluggable storage layer for bucket data, and MongoDB and Postgres are currently supported. We refer to this as the _bucket storage_ database and it is separate from the connection to your _source database_ (Postgres, MongoDB, MySQL or SQL Server). Our cloud-hosting offering (PowerSync Cloud) uses MongoDB Atlas as the _bucket storage_ database.
+The PowerSync Service persists the bucket state in durable storage: there is a pluggable storage layer for bucket data, and MongoDB and Postgres are currently supported as _bucket storage_ databases. The _bucket storage_ database is separate from the connection to your _source database_ (Postgres, MongoDB, MySQL or SQL Server). Our cloud-hosting offering (PowerSync Cloud) uses MongoDB Atlas as the _bucket storage_ database.
Persisting the bucket state in a database is also part of how PowerSync achieves high scalability: it means that the PowerSync Service can have a low memory footprint even as you scale to very large volumes of synced data and users/clients.
## Replication From the Source Database
-As mentioned above, one of the primary purposes of the PowerSync Service is replicating data from the source database, based on the Sync Rules or Sync Streams configuration:
+As mentioned above, one of the primary purposes of the PowerSync Service is replicating data from the source database, based on your Sync Streams (or legacy Sync Rules):
@@ -61,13 +77,13 @@ As mentioned above, one of the primary purposes of the PowerSync Service is repl
When the PowerSync Service replicates data from the source database, it:
-1. Pre-processes the data according to the [Sync Rules](/sync/rules/overview) or [Sync Streams](/sync/streams/overview), splitting data into _buckets_ (as explained above) and transforming the data if required.
+1. Pre-processes the data according to your [Sync Streams](/sync/streams/overview) (or [Sync Rules](/sync/rules/overview)), splitting data into _buckets_ (as explained above) and transforming the data if required.
2. Persists each operation into the relevant buckets, ready to be streamed to clients.
### Initial Replication vs. Incremental Replication
-Whenever a new version of Sync Rules or Sync Streams are deployed, initial replication takes place by means of taking a snapshot of all tables/collections referenced in the Sync Rules / Streams.
+Whenever a new version of Sync Streams (or legacy Sync Rules) is deployed, initial replication takes place by means of taking a snapshot of all tables/collections they reference.
After that, data is incrementally replicated using a change data capture stream (the specific mechanism depends on the source database type: Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture).
@@ -78,7 +94,7 @@ As mentioned above, the other primary purpose of the PowerSync Service is stream
The PowerSync Service authenticates clients/users using [JWTs](/configuration/auth/overview). Once a client/user is authenticated:
-1. The PowerSync Service calculates a list of buckets for the user to sync using [Parameter Queries](/sync/rules/parameter-queries).
+1. The PowerSync Service calculates a list of buckets for the user to sync based on their Sync Stream subscriptions (or [Parameter Queries](/sync/rules/parameter-queries) in legacy Sync Rules).
2. The Service streams any operations added to those buckets since the last time the client/user connected.
The Service then continuously monitors for buckets that are added or removed, as well as for new operations within those buckets, and streams those changes.
@@ -92,6 +108,5 @@ For more details on exactly how streaming sync works, see [PowerSync Protocol](/
The repo for the PowerSync Service can be found here:
-
-
+
diff --git a/client-sdks/advanced/custom-types-arrays-and-json.mdx b/client-sdks/advanced/custom-types-arrays-and-json.mdx
index 650b03cf..6bcbb1d1 100644
--- a/client-sdks/advanced/custom-types-arrays-and-json.mdx
+++ b/client-sdks/advanced/custom-types-arrays-and-json.mdx
@@ -7,7 +7,7 @@ PowerSync supports JSON/JSONB and array columns. They are synced as JSON text an
## JSON and JSONB
-The PowerSync Service treats JSON and JSONB columns as text and provides many helpers for working with JSON in Sync Rules.
+The PowerSync Service treats JSON and JSONB columns as text and provides many helpers for working with JSON in [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
**Note:** Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.
@@ -20,18 +20,38 @@ ALTER TABLE todos
ADD COLUMN custom_payload json;
```
-### Sync Rules
+### Sync Streams
-PowerSync treats JSON columns as text and provides transformation functions in Sync Rules such as `json_extract()`.
+
+
+ PowerSync treats JSON columns as text. Use `json_extract()` and other JSON functions in stream queries. The stream below automatically syncs the todos for all lists owned by the authenticated user:
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ my_json_todos:
+ auto_subscribe: true
+ with:
+ owned_lists: SELECT id AS list_id FROM lists WHERE owner_id = auth.user_id()
+ query: SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') IN owned_lists
+ ```
-```yaml
-bucket_definitions:
- my_json_todos:
- # Separate bucket per To-Do list
- parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id()
- data:
- - SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') = bucket.list_id
-```
+ With `auto_subscribe: true` set, clients subscribe to this stream automatically; no explicit `syncStream()` subscription call is needed.
+
+
+ PowerSync treats JSON columns as text and provides transformation functions in Sync Rules such as `json_extract()`.
+
+ ```yaml
+ bucket_definitions:
+ my_json_todos:
+ # Separate bucket per To-Do list
+ parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id()
+ data:
+ - SELECT * FROM todos WHERE json_extract(custom_payload, '$.json_list') = bucket.list_id
+ ```
+
+
### Client SDK
@@ -178,7 +198,7 @@ You can write the entire updated column value as a string, or, with `trackPrevio
PowerSync treats array columns as JSON text. This means that the SQLite JSON operators can be used on any array columns.
-Additionally, some helper methods such as array membership are available in Sync Rules.
+Additionally, array membership is supported in [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) so you can sync rows based on whether a parameter value appears in an array column.
**Note:** Native Postgres arrays, JSON arrays, and JSONB arrays are effectively all equivalent in PowerSync.
@@ -188,11 +208,10 @@ Array columns are defined in Postgres using the following syntax:
```sql
ALTER TABLE todos
-
ADD COLUMN unique_identifiers text[];
```
-### Sync Rules
+### Sync Streams
Array columns are converted to text by the PowerSync Service. A text array as defined above would be synced to clients as the following string:
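On the client, such a synced array value is plain JSON text and can be parsed with standard JSON tools (the row value below is a hypothetical example):

```javascript
// An array column arrives as JSON text; parse it to get a regular array.
const row = { unique_identifiers: '["id-aaa-111","id-bbb-222"]' }; // hypothetical synced row
const ids = JSON.parse(row.unique_identifiers);
// ids is now a plain JavaScript array of strings.
```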
@@ -200,19 +219,36 @@ Array columns are converted to text by the PowerSync Service. A text array as de
**Array Membership**
-It's possible to sync rows dynamically based on the contents of array columns using the `IN` operator. For example:
+
+
+ Sync rows where a subscription parameter value is in the row's array column using `IN`:
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ custom_todos:
+ query: SELECT * FROM todos WHERE subscription.parameter('list_id') IN unique_identifiers
+ ```
-```yaml
-bucket_definitions:
- custom_todos:
- # Separate bucket per To-Do list
- parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id()
- data:
- - SELECT * FROM todos WHERE bucket.list_id IN unique_identifiers
-```
+ The client subscribes per list (e.g. `db.syncStream('custom_todos', { list_id: listId }).subscribe()`).
+
+
+ It's possible to sync rows dynamically based on the contents of array columns using the `IN` operator:
+
+ ```yaml
+ bucket_definitions:
+ custom_todos:
+ # Separate bucket per To-Do list
+ parameters: SELECT id AS list_id FROM lists WHERE owner_id = request.user_id()
+ data:
+ - SELECT * FROM todos WHERE bucket.list_id IN unique_identifiers
+ ```
+
+
-See these additional details when using the `IN` operator: [Operators](/sync/rules/supported-sql#operators)
+See these additional details when using the `IN` operator: [Operators](/sync/supported-sql#operators)
### Client SDK
@@ -363,7 +399,7 @@ You can write the entire updated column value as a string, or, with `trackPrevio
## Custom Types
-PowerSync serializes custom types as text. For details, see [types in sync rules](/sync/types).
+PowerSync respects Postgres custom types: DOMAIN types sync as their inner type, custom type columns as JSON objects, arrays of custom types as JSON arrays, and ranges (and multi-ranges) as structured JSON. This behavior is the default for Sync Streams. For configuration and legacy behavior, see [Compatibility](/sync/advanced/compatibility#custom-postgres-types). For type handling in queries, see [Types](/sync/types).
### Postgres
@@ -378,18 +414,33 @@ create type location_address AS (
);
```
-### Sync Rules
+### Sync Streams
-Custom type columns are converted to text by the PowerSync Service.
-Depending on whether the `custom_postgres_types` [compatibility option](/sync/advanced/compatibility) is enabled,
-PowerSync would sync the row as:
+
+
+ The custom type column is serialized as JSON and you can use `json_extract()` and other JSON functions in stream queries:
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ todos_by_city:
+ query: SELECT * FROM todos WHERE json_extract(location, '$.city') = subscription.parameter('city')
+ ```
+
+
+ Custom type columns are converted to text by the PowerSync Service.
+ Depending on whether the `custom_postgres_types` [compatibility option](/sync/advanced/compatibility) is enabled,
+ PowerSync would sync the row as:
-- `{"street":"1000 S Colorado Blvd.","city":"Denver","state":"CO","zip":80211}` if the option is enabled.
-- `("1000 S Colorado Blvd.",Denver,CO,80211)` if the option is disabled.
+ - `{"street":"1000 S Colorado Blvd.","city":"Denver","state":"CO","zip":80211}` if the option is enabled.
+ - `("1000 S Colorado Blvd.",Denver,CO,80211)` if the option is disabled.
-You can use regular string and JSON manipulation functions in Sync Rules. This means that individual values of the type
-can be synced with `json_extract` if the `custom_postgres_types` compatibility option is enabled.
-Without the option, the entire column must be synced as text.
+ You can use regular string and JSON manipulation functions in Sync Rules. This means that individual values of the type
+ can be synced with `json_extract` if the `custom_postgres_types` compatibility option is enabled.
+ Without the option, the entire column must be synced as text.
+
+
### Client SDK
diff --git a/client-sdks/advanced/gis-data-postgis.mdx b/client-sdks/advanced/gis-data-postgis.mdx
index e39852f5..1c318919 100644
--- a/client-sdks/advanced/gis-data-postgis.mdx
+++ b/client-sdks/advanced/gis-data-postgis.mdx
@@ -109,13 +109,13 @@ The data looks exactly how it’s stored in the Postgres database i.e.
3. **PostGIS**: The `geography` type is transformed into an encoded form of the value.
1. If you insert coordinates as `st_point(39.742043, -104.991531)` then it is shown as `0101000020E6100000E59CD843FBDE4340E9818FC18AC052C0`
-## Sync Rules
+## Sync Streams
### PostGIS
Example use case: Extract x (long) and y (lat) values from a PostGIS type, to use these values independently in an application.
-Currently, PowerSync supports the following functions that can be used when selecting data in your Sync Rules: [Operators and Functions](/sync/rules/supported-sql#functions)
+PowerSync supports the following PostGIS functions in Sync Streams (or legacy Sync Rules): [Operators and Functions](/sync/supported-sql#functions)
1. `ST_AsGeoJSON`
2. `ST_AsText`
@@ -126,11 +126,25 @@ Currently, PowerSync supports the following functions that can be used when sele
IMPORTANT NOTE: These functions will only work if your Postgres instance has the PostGIS extension installed and you’re storing values as type `geography` or `geometry`.
-```yaml
-# sync-rules.yaml
-bucket_definitions:
- global:
- data:
- - SELECT * FROM lists
- - SELECT *, st_x(location) as longitude, st_y(location) as latitude from todos
-```
+
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ global:
+ queries:
+ - SELECT * FROM lists
+ - SELECT *, st_x(location) as longitude, st_y(location) as latitude FROM todos
+ ```
+
+
+ ```yaml
+ bucket_definitions:
+ global:
+ data:
+ - SELECT * FROM lists
+ - SELECT *, st_x(location) as longitude, st_y(location) as latitude from todos
+ ```
+
+
diff --git a/client-sdks/advanced/pre-seeded-sqlite.mdx b/client-sdks/advanced/pre-seeded-sqlite.mdx
index a36fa8f5..3c0280ce 100644
--- a/client-sdks/advanced/pre-seeded-sqlite.mdx
+++ b/client-sdks/advanced/pre-seeded-sqlite.mdx
@@ -20,20 +20,36 @@ If you're interested in seeing an end-to-end example, we've prepared a demo repo
# Main Concepts
## Generate a scoped JWT token
-In most cases you'd want to pre-seed the SQLite database with user specific data and not all data from the source database, as you normally would when using PowerSync. For this you would need to generate a JWT tokens that include the necessary properties to satisfy the conditions of the parameter queries in your sync rules.
-
-Let's say we have sync rules that look like this:
-```yaml
-sync_rules:
- content: |
- bucket_definitions:
- store_products:
- parameters: SELECT id as store_id FROM stores WHERE id = request.jwt() ->> 'store_id'
- data:
- - SELECT * FROM products WHERE store_id = bucket.store_id
-```
-
-In the example above the `store_id` is part of the JWT payload and is used in a parameter query to filter products by store for a user. Given this we would want to do the following:
+In most cases you'd want to pre-seed the SQLite database with user-specific data (as you normally would when using PowerSync), rather than with all data from the source database. For this you need to generate JWT tokens that include the necessary properties to satisfy the conditions of the queries in your Sync Streams (or legacy Sync Rules).
+
+Let's say we have the following sync config:
+
+
+
+ ```yaml
+ sync_config:
+ content: |
+ config:
+ edition: 3
+ streams:
+ store_products:
+ query: SELECT * FROM products WHERE store_id = auth.parameter('store_id')
+ ```
+
+
+ ```yaml
+ sync_config:
+ content: |
+ bucket_definitions:
+ store_products:
+ parameters: SELECT id as store_id FROM stores WHERE id = request.jwt() ->> 'store_id'
+ data:
+ - SELECT * FROM products WHERE store_id = bucket.store_id
+ ```
+
+
+
+In the example above, the `store_id` is part of the JWT payload and is used to filter products by store for a user. Given this, we would want to do the following:
1. Query the source database, directly from the Node.js application, for all the store ids you'd want a pre-seeded SQLite database for.
2. Generate a JWT token for each store and include the `store_id` in the payload.
3. In the Node.js application which implements the PowerSync SDK, return the JWT token in the `fetchCredentials()` function.
diff --git a/client-sdks/advanced/raw-tables.mdx b/client-sdks/advanced/raw-tables.mdx
index 38603df1..c38a6460 100644
--- a/client-sdks/advanced/raw-tables.mdx
+++ b/client-sdks/advanced/raw-tables.mdx
@@ -480,7 +480,7 @@ In PowerSync's [JSON-based view system](/architecture/client-architecture#schema
### Adding raw tables as a new table
-When you're adding new tables to your sync rules, clients will start to sync data on those tables - even if the tables aren't mentioned in the client's schema yet. So at the time you're introducing a new raw table to your app, it's possible that PowerSync has already synced some data for that table, which would be stored in `ps_untyped`. When adding regular tables, PowerSync will automatically extract rows from `ps_untyped`. With raw tables, that step is your responsibility. To copy data, run these statements in a transaction after creating the table:
+When you're adding new tables to your Sync Streams (or legacy Sync Rules), clients will start to sync data on those tables - even if the tables aren't mentioned in the client's schema yet. So at the time you're introducing a new raw table to your app, it's possible that PowerSync has already synced some data for that table, which would be stored in `ps_untyped`. When adding regular tables, PowerSync will automatically extract rows from `ps_untyped`. With raw tables, that step is your responsibility. To copy data, run these statements in a transaction after creating the table:
```
INSERT INTO my_table (id, my_column, ...)
diff --git a/client-sdks/advanced/sequential-id-mapping.mdx b/client-sdks/advanced/sequential-id-mapping.mdx
index a27fc462..739bae2b 100644
--- a/client-sdks/advanced/sequential-id-mapping.mdx
+++ b/client-sdks/advanced/sequential-id-mapping.mdx
@@ -28,8 +28,8 @@ Before we get started, let's outline the changes we will have to make:
Add two triggers that will map the UUID to the integer ID and vice versa.
-
- Update the Sync Rules to use the new integer ID instead of the UUID column.
+
+ Update your Sync Streams (or legacy Sync Rules) to use the UUID column instead of the integer ID.
@@ -183,30 +183,46 @@ We will create the following two triggers that cover either scenario of updating
We now have triggers in place that will handle the mapping for our updated schema and
-can move on to updating the Sync Rules to use the UUID column instead of the integer ID.
-
-## Update Sync Rules
-
-As sequential IDs can only be created on the backend source database, we need to use UUIDs in the client. This can be done by updating both the `parameters` and `data` queries to use the new `uuid` columns.
-The `parameters` query is updated by removing the `list_id` alias (this is removed to avoid any confusion between the `list_id` column in the `todos` table), and
-the `data` query is updated to use the `uuid` column as the `id` column for the `lists` and `todos` tables. We also explicitly define which columns to select, as `list_id` is no longer required in the client.
-
-```yaml sync_rules.yaml {4, 7-8}
-bucket_definitions:
- user_lists:
- # Separate bucket per todo list
- parameters: select id from lists where owner_id = request.user_id()
- data:
- # Explicitly define all the columns
- - select uuid as id, created_at, name, owner_id from lists where id = bucket.id
- - select uuid as id, created_at, completed_at, description, completed, created_by, list_uuid from todos where list_id = bucket.id
-```
+can move on to updating your Sync Streams (or legacy Sync Rules) to use the UUID column instead of the integer ID.
+
+## Update Sync Streams
+
+As sequential IDs can only be created on the backend source database, we need to use UUIDs in the client. The sync config is updated to use the `uuid` column as the `id` column for the `lists` and `todos` tables, explicitly defining which columns to select so that `list_id` (the integer ID) is no longer exposed to the client.
+
+
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ user_lists:
+ auto_subscribe: true
+ with:
+ user_lists_param: SELECT id FROM lists WHERE owner_id = auth.user_id()
+ queries:
+ - "SELECT lists.uuid AS id, lists.created_at, lists.name, lists.owner_id FROM lists WHERE lists.id IN user_lists_param"
+ - "SELECT todos.uuid AS id, todos.created_at, todos.completed_at, todos.description, todos.completed, todos.created_by, todos.list_uuid FROM todos WHERE todos.list_id IN user_lists_param"
+ ```
+
+
+ ```yaml sync-config.yaml {4, 7-8}
+ bucket_definitions:
+ user_lists:
+ # Separate bucket per todo list
+ parameters: select id from lists where owner_id = request.user_id()
+ data:
+ # Explicitly define all the columns
+ - select uuid as id, created_at, name, owner_id from lists where id = bucket.id
+ - select uuid as id, created_at, completed_at, description, completed, created_by, list_uuid from todos where list_id = bucket.id
+ ```
+
+
-With the Sync Rules updated, we can now move on to updating the client to use UUIDs.
+We can now move on to updating the client to use UUIDs.
## Update Client to Use UUIDs
-With our Sync Rules updated, we no longer have the `list_id` column in the `todos` table.
+With Sync Streams updated, we no longer have the `list_id` column in the `todos` table.
We start by updating `AppSchema.ts` and replacing `list_id` with `list_uuid` in the `todos` table.
```typescript AppSchema.ts {3, 11}
const todos = new Table(
diff --git a/client-sdks/infinite-scrolling.mdx b/client-sdks/infinite-scrolling.mdx
index c64c8a1d..8a78e62d 100644
--- a/client-sdks/infinite-scrolling.mdx
+++ b/client-sdks/infinite-scrolling.mdx
@@ -17,11 +17,13 @@ This means that in many cases, you can sync a sufficient amount of data to let a
| ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| It works offline and is low-latency (data loads quickly from the local database). We don't need to load data from the backend via the network when the user reaches the bottom of the page/feed/list. | There will be cases where this approach won't work because the total volume of data might become too large for the local database - for example, when there's a wide range of tables that the user needs to be able to infinite scroll. Your app allows the user to apply filters to the displayed data, which results in fewer pages displayed from a large dataset, and therefore limited scrolling. |
-### 2) Control data sync using client parameters
+### 2) Control data sync using subscription or client parameters
-PowerSync supports the use of [client parameters](/sync/rules/client-parameters) which are specified directly by the client (i.e. not only through the [authentication token](/configuration/auth/custom)). The app can dynamically change these parameters on the client-side and they can be accessed in sync rules on the server-side. The developer can use these parameters to limit/control which data is synced, but since they are not trusted (because they are not passed via the JWT authentication token) they should not be used for access control. You should still filter data by e.g. user ID for access control purposes (using [token parameters](/sync/rules/parameter-queries) from the JWT).
+**Sync Streams** (recommended): Use [subscription parameters](/sync/streams/parameters#subscription-parameters) to subscribe to specific data on demand. For example, a client can subscribe to a specific "page" of data when the user scrolls to it. This is more flexible than client parameters — each subscription is independent and multiple tabs/views can subscribe with different parameters simultaneously.
-Usage example: To lazy-load/lazy-sync data for infinite scrolling, you could split your data into 'pages' and use a client parameter to specify which pages to sync to a user.
+**Sync Rules** (legacy): PowerSync supports the use of [client parameters](/sync/rules/client-parameters) which are specified directly by the client. The app can dynamically change these parameters on the client-side and they can be accessed in Sync Rules on the server-side. The developer can use these parameters to limit/control which data is synced, but since they are not trusted (because they are not passed via the JWT authentication token) they should not be used for access control. You should still filter data by e.g. user ID for access control purposes (using [token parameters](/sync/rules/parameter-queries) from the JWT).
+
+Usage example: To lazy-load/lazy-sync data for infinite scrolling, you could split your data into 'pages' and use a subscription parameter (Sync Streams) or client parameter (Sync Rules) to specify which pages to sync to a user.
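The paging above can be made concrete with a small, self-contained TypeScript sketch (the page size and helper names are illustrative, not part of any PowerSync SDK); the resulting page numbers would be passed as subscription or client parameters:

```typescript
// Split a dataset into fixed-size pages and work out which page numbers a
// client should request, given which rows are currently visible.
// All names here are illustrative; nothing below is PowerSync SDK API.
const PAGE_SIZE = 100; // rows per synced "page"

function pageForRow(rowIndex: number): number {
  return Math.floor(rowIndex / PAGE_SIZE);
}

// Pages needed to render rows [firstVisible, lastVisible], plus one page of
// look-ahead so the next page is already synced before it scrolls into view.
function pagesToSubscribe(firstVisible: number, lastVisible: number): number[] {
  const first = pageForRow(firstVisible);
  const last = pageForRow(lastVisible) + 1; // look-ahead page
  const pages: number[] = [];
  for (let p = first; p <= last; p++) pages.push(p);
  return pages;
}
```

The one-page look-ahead is a design choice: syncing slightly ahead of the scroll position hides sync latency from the user at the cost of a little extra data.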
| Pros | Cons |
| --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------- |
@@ -37,7 +39,7 @@ In this scenario we can sync a smaller number of rows to the user initially. If
### 4) Client-side triggers a server-side function to flag data to sync
-You could add a flag to certain records in your backend source database which are used by your [Sync Rules](/sync/rules/overview) to determine which records to sync to specific users. Then your app could make an API call which triggers a function that updates the flags on certain records, causing more records to be synced to the user.
+You could add a flag to certain records in your backend source database which are used by your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) to determine which records to sync to specific users. Then your app could make an API call which triggers a function that updates the flags on certain records, causing more records to be synced to the user.
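A rough sketch of this pattern, with hypothetical table and column names (`todos`, `sync_flag`); the flag column would be referenced in your Sync Stream or Sync Rules queries, and the UPDATE would run in a backend function triggered by the app's API call:

```typescript
// Hypothetical backend helper (not PowerSync API): build the UPDATE that flags
// older records for a user so they start matching the sync queries.
function flagRecordsForSyncSql(userId: string, olderThan: string) {
  return {
    sql: "UPDATE todos SET sync_flag = TRUE WHERE created_by = $1 AND created_at < $2",
    params: [userId, olderThan],
  };
}
```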
## Questions?
diff --git a/client-sdks/orms/js/overview.mdx b/client-sdks/orms/js/overview.mdx
index 96702849..8097c464 100644
--- a/client-sdks/orms/js/overview.mdx
+++ b/client-sdks/orms/js/overview.mdx
@@ -1,5 +1,5 @@
---
-title: "ORM Overview"
+title: "JavaScript ORMs Overview"
description: "Reference for using ORMs in PowerSync's JavaScript-based SDKs"
sidebarTitle: Overview
---
diff --git a/client-sdks/orms/kotlin/room.mdx b/client-sdks/orms/kotlin/room.mdx
index 962809d4..5b3e9624 100644
--- a/client-sdks/orms/kotlin/room.mdx
+++ b/client-sdks/orms/kotlin/room.mdx
@@ -92,7 +92,7 @@ Here:
- The SQL statements must match the schema created by Room.
- The `RawTable.name` and `PendingStatementParameter.Column` values must match the table and column names of the synced
- table from the PowerSync Service, derived from your sync rules.
+ table from the PowerSync Service, derived from your Sync Rules.
For more details, see [raw tables](/client-sdks/advanced/raw-tables).
diff --git a/client-sdks/reference/capacitor.mdx b/client-sdks/reference/capacitor.mdx
index 1ead0477..6c11c288 100644
--- a/client-sdks/reference/capacitor.mdx
+++ b/client-sdks/reference/capacitor.mdx
@@ -56,7 +56,7 @@ import LocalOnly from '/snippets/local-only-escape.mdx';
## Getting Started
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams or legacy Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
@@ -66,7 +66,7 @@ import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx';
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
@@ -116,7 +116,7 @@ export type ListRecord = Database['lists'];
### 2. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
diff --git a/client-sdks/reference/dotnet.mdx b/client-sdks/reference/dotnet.mdx
index 6b2b1824..d88ec949 100644
--- a/client-sdks/reference/dotnet.mdx
+++ b/client-sdks/reference/dotnet.mdx
@@ -60,7 +60,7 @@ For more details, please refer to the package [README](https://github.com/powers
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams or legacy Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
@@ -72,7 +72,7 @@ import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx';
You can use [this example](https://github.com/powersync-ja/powersync-dotnet/blob/main/demos/CommandLine/AppSchema.cs) as a reference when defining your schema.
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
#### Schema definition syntax
@@ -142,7 +142,7 @@ var todos = await db.GetAll("SELECT * FROM todos");
### 2. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
diff --git a/client-sdks/reference/flutter.mdx b/client-sdks/reference/flutter.mdx
index 6cc9e838..9fc8498b 100644
--- a/client-sdks/reference/flutter.mdx
+++ b/client-sdks/reference/flutter.mdx
@@ -49,7 +49,7 @@ Get started quickly by using the self-hosted **Flutter** + **Supabase** template
## Getting Started
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams or legacy Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
For this reference document, we assume that you have created a Flutter project and have the following directory structure:
@@ -71,11 +71,11 @@ lib/
### 1\. Define the Client-Side Schema
-The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and [Sync Rules](/sync/rules/overview), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
+The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
@@ -109,7 +109,7 @@ const schema = Schema(([
### 2\. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
To instantiate `PowerSyncDatabase`, inject the Schema you defined in the previous step and a file path — it's important to only instantiate one instance of `PowerSyncDatabase` per file.
diff --git a/client-sdks/reference/javascript-web.mdx b/client-sdks/reference/javascript-web.mdx
index ef55320c..0a3dd07e 100644
--- a/client-sdks/reference/javascript-web.mdx
+++ b/client-sdks/reference/javascript-web.mdx
@@ -85,7 +85,7 @@ The PowerSync [JavaScript Web SDK](../javascript-web) is compatible with popular
## Getting Started
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams or legacy Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
@@ -95,7 +95,7 @@ import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx';
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
@@ -141,7 +141,7 @@ export type ListRecord = Database['lists'];
### 2. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
diff --git a/client-sdks/reference/kotlin.mdx b/client-sdks/reference/kotlin.mdx
index e37ac7c7..490ece19 100644
--- a/client-sdks/reference/kotlin.mdx
+++ b/client-sdks/reference/kotlin.mdx
@@ -44,15 +44,15 @@ import LocalOnly from '/snippets/local-only-escape.mdx';
## Getting Started
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams or legacy Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1\. Define the Client-Side Schema
-The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and [Sync Rules](/sync/rules/overview), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
+The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
@@ -99,7 +99,7 @@ val AppSchema: Schema = Schema(
### 2\. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
diff --git a/client-sdks/reference/node.mdx b/client-sdks/reference/node.mdx
index 7304c939..0436417b 100644
--- a/client-sdks/reference/node.mdx
+++ b/client-sdks/reference/node.mdx
@@ -51,7 +51,7 @@ import LocalOnly from '/snippets/local-only-escape.mdx';
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams or legacy Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
@@ -63,13 +63,13 @@ You can use [this example](https://github.com/powersync-ja/powersync-js/blob/e5a
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
Select JavaScript and replace the suggested import with `@powersync/node`.
### 2. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
diff --git a/client-sdks/reference/react-native-and-expo.mdx b/client-sdks/reference/react-native-and-expo.mdx
index 9d5da1d9..bcd3264a 100644
--- a/client-sdks/reference/react-native-and-expo.mdx
+++ b/client-sdks/reference/react-native-and-expo.mdx
@@ -51,15 +51,15 @@ A separate `powersync-react` package is available containing React hooks for Pow
## Getting Started
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams or legacy Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1\. Define the Client-Side Schema
-The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and [Sync Rules](/sync/rules/overview), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
+The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
@@ -104,7 +104,7 @@ export type ListRecord = Database['lists'];
### 2\. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
diff --git a/client-sdks/reference/rust.mdx b/client-sdks/reference/rust.mdx
index 0f3bfef8..e3b77b84 100644
--- a/client-sdks/reference/rust.mdx
+++ b/client-sdks/reference/rust.mdx
@@ -44,15 +44,15 @@ import LocalOnly from '/snippets/local-only-escape.mdx';
## Getting Started
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams or legacy Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1\. Define the Client-Side Schema
-The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and [Sync Rules](/sync/rules/overview), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
+The first step is to define the client-side schema, which refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The client-side schema is typically mainly derived from your backend source database schema and your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), but can also include other tables such as local-only tables. Note that schema migrations are not required on the SQLite database due to the schemaless nature of the [PowerSync protocol](/architecture/powersync-protocol): schemaless data is synced to the client-side SQLite database, and the client-side schema is then applied to that data using _SQLite views_ to allow for structured querying of the data. The schema is applied when the local PowerSync database is constructed (as we'll show in the next step).
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
@@ -96,7 +96,7 @@ pub fn app_schema() -> Schema {
### 2\. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
#### Process setup
diff --git a/client-sdks/reference/swift.mdx b/client-sdks/reference/swift.mdx
index 8291e9da..030929dc 100644
--- a/client-sdks/reference/swift.mdx
+++ b/client-sdks/reference/swift.mdx
@@ -39,7 +39,7 @@ The PowerSync Swift SDK makes use of the [PowerSync Kotlin SDK](https://github.c
## Getting Started
-**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Rules (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
+**Prerequisites**: To sync data between your client-side app and your backend source database, you must have completed the necessary setup for PowerSync, which includes connecting your source database to the PowerSync Service and deploying Sync Streams (or legacy Sync Rules) (steps 1-4 in the [Setup Guide](/intro/setup-guide)).
### 1. Define the Client-Side Schema
@@ -49,7 +49,7 @@ import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx';
-The types available are `text`, `integer` and `real`. These should map directly to the values produced by the [Sync Rules](/sync/rules/overview). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
+The types available are `text`, `integer` and `real`. These should map directly to the values produced by your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). If a value doesn't match, it is cast automatically. For details on how backend source database types are mapped to the SQLite types, see [Types](/sync/types).
**Example**:
@@ -103,7 +103,7 @@ let AppSchema = Schema(lists, todos)
### 2. Instantiate the PowerSync Database
-Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Rules](/sync/rules/overview). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
+Next, you need to instantiate the PowerSync database. PowerSync streams changes from your backend source database into the client-side SQLite database, based on your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). In your client-side app, you can read from and write to the local SQLite database, whether the user is online or offline.
**Example**:
diff --git a/configuration/app-backend/client-side-integration.mdx b/configuration/app-backend/client-side-integration.mdx
index 4995bd08..fc388431 100644
--- a/configuration/app-backend/client-side-integration.mdx
+++ b/configuration/app-backend/client-side-integration.mdx
@@ -10,7 +10,7 @@ After you've [instantiated](/intro/setup-guide#instantiate-the-powersync-databas
| Purpose | Description |
|---------|-------------|
-| **Uploading mutations to your backend:** | Mutations that are made to the client-side SQLite database are uploaded to your backend application, where you control how they're applied to your backend source database (Postgres, MongoDB, MySQL, or SQL Server). This is how PowerSync achieves bi-directional syncing of data: The [PowerSync Service](/architecture/powersync-service) provides the _server-to-client read path_ based on your [Sync Rules or Streams](/sync/overview), and the _client-to-server write path_ goes via your backend. |
+| **Uploading mutations to your backend:** | Mutations that are made to the client-side SQLite database are uploaded to your backend application, where you control how they're applied to your backend source database (Postgres, MongoDB, MySQL, or SQL Server). This is how PowerSync achieves bi-directional syncing of data: The [PowerSync Service](/architecture/powersync-service) provides the _server-to-client read path_ based on your [Sync Streams or Sync Rules (legacy)](/sync/overview), and the _client-to-server write path_ goes via your backend. |
| **Authentication integration:** (optional) | PowerSync uses JWTs for authentication between the Client SDK and PowerSync Service. Some [authentication providers](/configuration/auth/overview#common-authentication-providers) generate JWTs for users which PowerSync can verify directly. For others, some code must be [added to your application backend](/configuration/auth/custom) to generate the JWTs. |
diff --git a/configuration/auth/custom.mdx b/configuration/auth/custom.mdx
index 96a44411..9125685e 100644
--- a/configuration/auth/custom.mdx
+++ b/configuration/auth/custom.mdx
@@ -35,7 +35,7 @@ Requirements for the signed JWT:
2. Alternatively, specify a custom audience in the instance settings (Cloud) or in your config file ([self-hosted](#self-hosted-configuration)).
1. The JWT must expire in 24 hours or less, and 60 minutes or less is recommended. Specifically, both `iat` and `exp` fields must be present, with a difference of 86,400 or less between them.
2. The user ID must be used as the `sub` of the JWT.
-4. Additional fields can be added which can be referenced in Sync Rules [parameter queries](/sync/rules/parameter-queries) or Sync Streams (as [`auth.parameters()`](/sync/streams/overview#accessing-parameters)).
+4. Additional fields can be added which can be referenced in Sync Streams (as [`auth.parameters()`](/sync/streams/overview#accessing-parameters)) or Sync Rules [parameter queries](/sync/rules/parameter-queries).
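+The claim requirements above can be sketched as a payload builder. The following is a minimal illustration in Python (standard library only; actual signing would be done with a JWT library, and the `org_id` custom claim is a hypothetical example, not a required field):

```python
import time

def build_powersync_claims(user_id: str, audience: str, ttl_seconds: int = 3600) -> dict:
    # exp - iat must be 86,400 seconds (24 hours) or less; 3,600 (60 min) is recommended.
    if not 0 < ttl_seconds <= 86_400:
        raise ValueError("exp - iat must be 86,400 seconds or less")
    now = int(time.time())
    return {
        "sub": user_id,   # the user ID; surfaced as auth.user_id() / request.user_id()
        "aud": audience,  # must match an allowed audience (see requirements above)
        "iat": now,
        "exp": now + ttl_seconds,
        # Additional fields can be referenced in Sync Streams (auth.parameters())
        # or Sync Rules parameter queries; "org_id" here is purely illustrative:
        "org_id": "org-123",
    }

claims = build_powersync_claims("user-42", "powersync")
```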
## Option 1: Asymmetric JWTs — Using JWKS (Recommended)
diff --git a/configuration/auth/development-tokens.mdx b/configuration/auth/development-tokens.mdx
index d49c4a8f..d9b053dc 100644
--- a/configuration/auth/development-tokens.mdx
+++ b/configuration/auth/development-tokens.mdx
@@ -20,8 +20,8 @@ This can also be used to generate a token for a specific user to debug issues.
3. Check the **Development tokens** setting and save your changes
4. Click the **Connect** button in the top bar
5. Enter a user ID:
- - If you're only using [global](/sync/rules/global-buckets) Sync Rules / Streams, you can enter any value (e.g., `test-user`) since all data syncs to all users
- - If you're using any Sync Rules / Streams that filter data by user, enter a user ID that matches a user in your database. The user ID will be used as `request.user_id()` in your Sync Rules or `auth.user_id()` in Sync Streams.
+ - If your Sync Streams / Sync Rules don't filter data by user (the same data syncs to all users), you can use any value (e.g., `test-user`).
+ - If they do filter data by user, enter a user ID that matches a user in your database. PowerSync uses this value (`auth.user_id()` in Sync Streams or `request.user_id()` in Sync Rules) to determine what to sync.
6. Click **Generate Token** and copy the token
@@ -43,7 +43,7 @@ Development tokens can be used for testing purposes either with the [Sync Diagno
### Using with Sync Diagnostics Client
-The [Sync Diagnostics Client](https://diagnostics-app.powersync.com) allows you to quickly test syncing and inspect a user's SQLite database, to verify that your PowerSync Service configuration and Sync Rules / Streams behave as expected.
+The [Sync Diagnostics Client](https://diagnostics-app.powersync.com) allows you to quickly test syncing and inspect a user's SQLite database, to verify that your PowerSync Service configuration and Sync Streams / Sync Rules behave as expected.
1. Open the [Sync Diagnostics Client](https://diagnostics-app.powersync.com)
2. Enter the generated development token at **PowerSync Token**.
diff --git a/configuration/auth/firebase-auth.mdx b/configuration/auth/firebase-auth.mdx
index 41641997..2a8a5cba 100644
--- a/configuration/auth/firebase-auth.mdx
+++ b/configuration/auth/firebase-auth.mdx
@@ -13,10 +13,10 @@ Firebase signs these tokens using RS256.
PowerSync will periodically refresh the keys using the above JWKS URI, and validate tokens against the configured audience (token `aud` value).
The Firebase user UID will be available as:
+* `auth.user_id()` in [Sync Streams](/sync/streams/overview) (recommended)
* `request.user_id()` in [Sync Rules](/sync/rules/overview) (previously `token_parameters.user_id`)
-* Or, `auth.user_id()` if you are using [Sync Streams](/sync/streams/overview)
-To use a different identifier as the user ID in Sync Rules / Streams (for example, user email), use [Custom Authentication](/configuration/auth/custom).
+To use a different identifier as the user ID in Sync Streams / Sync Rules (for example, user email), use [Custom Authentication](/configuration/auth/custom).
### PowerSync Cloud Configuration
diff --git a/configuration/auth/supabase-auth.mdx b/configuration/auth/supabase-auth.mdx
index 7a43bdda..df9bd09f 100644
--- a/configuration/auth/supabase-auth.mdx
+++ b/configuration/auth/supabase-auth.mdx
@@ -236,11 +236,11 @@ Supabase Auth is enabled, but no Supabase connection string found. Skipping Supa
This means PowerSync couldn't detect your Supabase project from the database connection string. Use [manual JWKS configuration](#manual-jwks-configuration) instead.
-## Sync Rules / Streams
+## Sync Streams
The Supabase user UUID will be available as:
-* `request.user_id()` in [Sync Rules](/sync/rules/overview)
* `auth.user_id()` in [Sync Streams](/sync/streams/overview).
+* `request.user_id()` in [Sync Rules](/sync/rules/overview)
-To use a different identifier as the user ID in Sync Rules / Streams (for example, user email), use [Custom Authentication](/configuration/auth/custom).
+To use a different identifier as the user ID in Sync Streams / Sync Rules (for example, user email), use [Custom Authentication](/configuration/auth/custom).
diff --git a/configuration/powersync-service/cloud-instances.mdx b/configuration/powersync-service/cloud-instances.mdx
index 03adf3ab..42b38781 100644
--- a/configuration/powersync-service/cloud-instances.mdx
+++ b/configuration/powersync-service/cloud-instances.mdx
@@ -16,7 +16,7 @@ After creating an instance, you can configure various settings through the [Powe
- **Database Connections**: Connect your instance to your source database. See [Source Database Connection](/configuration/source-db/connection) for details.
- **Client Auth**: Configure how clients authenticate. See [Authentication Setup](/configuration/auth/overview) for details.
-- **Sync Rules / Sync Streams**: Define what data to sync to clients. See [Sync Rules & Sync Streams Overview](/sync/overview) for details.
+- **Sync Streams / Sync Rules (legacy)**: Define what data to sync to clients. See [Sync Streams & Sync Rules Overview](/sync/overview) for details.
- **Settings**: Advanced instance-specific settings.
For more information about managing instances, see the [PowerSync Dashboard](/tools/powersync-dashboard) documentation.
diff --git a/configuration/powersync-service/self-hosted-instances.mdx b/configuration/powersync-service/self-hosted-instances.mdx
index 2c9fc8b9..b41b14a3 100644
--- a/configuration/powersync-service/self-hosted-instances.mdx
+++ b/configuration/powersync-service/self-hosted-instances.mdx
@@ -54,14 +54,10 @@ storage:
# The port which the PowerSync API server will listen on
port: 80
-# Specify sync rules
-sync_rules:
- content: |
- bucket_definitions:
- global:
- data:
- - SELECT * FROM lists
- - SELECT * FROM todos
+# Specify Sync Streams or legacy Sync Rules (see Sync Streams section below).
+# Referencing a separate file is recommended so you can edit streams/rules without nesting YAML.
+sync_config:
+ path: sync-config.yaml
# Settings for client authentication
client_auth:
@@ -230,26 +226,57 @@ Separate Postgres servers are required for replication connections (i.e. source
| Below 14 | Separate servers are required for the source and bucket storage. Replication will be blocked if the same server is detected. |
| 14 and above | The source database and bucket storage database can be on the same server. Using the same database (with separate schemas) is supported but may lead to higher CPU usage. Using separate servers remains an option. |
-## Sync Rules
+## Sync Streams
-Your project's [Sync Rules](/sync/rules/overview) can either be specified within your configuration file directly, or in a separate file that is referenced:
+Your Sync Streams (or legacy Sync Rules) configuration can be in a separate file (recommended) or inline in the main config. The `sync_config:` key is used for both Sync Streams and Sync Rules.
-```yaml config.yaml
-# Define sync rules:
-sync_rules:
+
+ **Separate file**: Referencing a file with `path:` keeps your main config tidy and makes editing Sync Streams/Sync Rules easier. Ensure the file is available at that path (e.g. in the same directory as your main config or on a mounted volume).
+
+
+
+```yaml Sync Streams — Separate File (Recommended)
+# sync-config.yaml (reference from main config with sync_config: path: sync-config.yaml)
+config:
+ edition: 3
+streams:
+ todos:
+ auto_subscribe: true
+ query: SELECT * FROM todos WHERE owner_id = auth.user_id()
+```
+
+```yaml Sync Streams — Inline
+sync_config:
+ content: |
+ config:
+ edition: 3
+ streams:
+ todos:
+ auto_subscribe: true
+ query: SELECT * FROM todos WHERE owner_id = auth.user_id()
+```
+
+```yaml Sync Rules — Separate File (Legacy)
+# sync-config.yaml (reference from main config with sync_config: path: sync-config.yaml)
+bucket_definitions:
+ global:
+ data:
+ - SELECT * FROM lists
+ - SELECT * FROM todos
+```
+
+```yaml Sync Rules — Inline (Legacy)
+sync_config:
content: |
bucket_definitions:
global:
data:
- SELECT * FROM lists
- SELECT * FROM todos
-
-# Alternatively, reference a sync rules file
-# sync_rules:
- # path: sync_rules.yaml
```
+
-For more information, see [Sync Rules](/sync/rules/overview).
+For more information, see [Sync Streams](/sync/streams/overview) (recommended) or [Sync Rules](/sync/rules/overview) (legacy).
-To verify that your Sync Rules are functioning correctly, inspect the contents of your bucket storage database.
+To verify that your Sync Streams (or legacy Sync Rules) are functioning correctly, inspect the contents of your bucket storage database.
diff --git a/configuration/source-db/postgres-maintenance.mdx b/configuration/source-db/postgres-maintenance.mdx
index 2028cd6e..67689f37 100644
--- a/configuration/source-db/postgres-maintenance.mdx
+++ b/configuration/source-db/postgres-maintenance.mdx
@@ -6,7 +6,7 @@ title: "Postgres Maintenance"
Postgres logical replication slots are used to keep track of [replication](/architecture/powersync-service#replication-from-the-source-database) progress (recorded as a [LSN](https://www.postgresql.org/docs/current/datatype-pg-lsn.html)).
-Every time a new version of [Sync Rules or Sync Streams](/sync/overview) are deployed, PowerSync creates a new replication slot, then switches over and deletes the old replication slot when the reprocessing of the new Sync Rules/Streams version is done.
+Every time a new version of [Sync Streams or Sync Rules](/sync/overview) is deployed, PowerSync creates a new replication slot, then switches over and deletes the old replication slot once reprocessing of the new Sync Streams/Rules version is done.
The replication slots can be viewed using this query:
@@ -35,7 +35,7 @@ Postgres prevents active slots from being dropped. If it does happen (e.g. while
### Maximum Replication Slots
-Postgres is configured with a maximum number of replication slots per server. Since each PowerSync instance uses one replication slot for replication and an additional one while deploying a new Sync Rules/Streams version, the maximum number of PowerSync instances connected to one Postgres server is equal to the maximum number of replication slots, minus 1\.
+Postgres is configured with a maximum number of replication slots per server. Since each PowerSync instance uses one replication slot for replication and an additional one while deploying a new Sync Streams/Rules version, the maximum number of PowerSync instances connected to one Postgres server is equal to the maximum number of replication slots, minus 1\.
If other clients are also using replication slots, this number is reduced further.
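As a worked sketch of the capacity arithmetic above (this helper is purely illustrative, not part of any PowerSync tooling):

```python
def max_powersync_instances(max_replication_slots: int, other_slot_consumers: int = 0) -> int:
    # Each instance holds one slot for ongoing replication, and one extra slot is
    # needed transiently while a new Sync Streams/Rules version is deployed, so
    # one slot must remain free. Slots held by other clients reduce capacity further.
    return max_replication_slots - 1 - other_slot_consumers

# With the Postgres default of max_replication_slots = 10:
capacity = max_powersync_instances(10)                         # 9 instances
shared = max_powersync_instances(10, other_slot_consumers=2)   # 7 instances
```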
diff --git a/configuration/source-db/setup.mdx b/configuration/source-db/setup.mdx
index 4946f785..a8206363 100644
--- a/configuration/source-db/setup.mdx
+++ b/configuration/source-db/setup.mdx
@@ -198,7 +198,7 @@ For other providers and self-hosted databases:
### 1. Ensure logical replication is enabled
-PowerSync reads the Postgres WAL using logical replication in order to create [buckets](/architecture/powersync-service#bucket-system) in accordance with the specified PowerSync [Sync Rules](/sync/rules/overview).
+PowerSync reads the Postgres WAL using logical replication in order to create [buckets](/architecture/powersync-service#bucket-system) in accordance with your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview).
If you are managing Postgres yourself, set `wal_level = logical` in your config file:
@@ -291,8 +291,8 @@ This requires the `changeStreamPreAndPostImages` option to be enabled on replica
PowerSync supports three configuration options for post-images:
1. **Off**: (`post_images: off`): Uses `fullDocument: 'updateLookup'` for backwards compatibility. This was the default for older instances. However, this may lead to consistency issues, so we strongly recommend enabling post-images instead.
-2. **Auto-Configure**: (`post_images: auto_configure`) The **default** for new instances: Automatically enables the `changeStreamPreAndPostImages` option on collections as needed. Requires the permissions/privileges mentioned above. If a collection is removed from [Sync Rules](/sync/rules/overview), you need to manually disable `changeStreamPreAndPostImages` on that collection.
-3. **Read-only**: (`post_images: read_only`): Uses `fullDocument: 'required'` and requires `changeStreamPreAndPostImages: { enabled: true }` to be set on every collection referenced in the [Sync Rules](/sync/rules/overview). Replication will error if this is not configured. This option is ideal when permissions are restricted.
+2. **Auto-Configure**: (`post_images: auto_configure`) The **default** for new instances: Automatically enables the `changeStreamPreAndPostImages` option on collections as needed. Requires the permissions/privileges mentioned above. If a collection is removed from [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview), you need to manually disable `changeStreamPreAndPostImages` on that collection.
+3. **Read-only**: (`post_images: read_only`): Uses `fullDocument: 'required'` and requires `changeStreamPreAndPostImages: { enabled: true }` to be set on every collection referenced in your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview). Replication will error if this is not configured. This option is ideal when permissions are restricted.
To manually configure collections for `read_only` mode, run this command on each collection:
@@ -381,7 +381,7 @@ GRANT SELECT ON .* TO 'repl_user'@'%';
FLUSH PRIVILEGES;
```
-It is possible to constrain the MySQL user further and limit access to specific tables. Care should be taken to ensure that all the tables in the Sync Rules are included in the grants.
+It is possible to constrain the MySQL user further and limit access to specific tables. Care should be taken to ensure that all the tables in your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) are included in the grants.
```sql
-- Grant select to the users and the invoices tables in the source database
diff --git a/debugging/error-codes.mdx b/debugging/error-codes.mdx
index 072fbcc3..533b308c 100644
--- a/debugging/error-codes.mdx
+++ b/debugging/error-codes.mdx
@@ -124,12 +124,24 @@ This reference documents PowerSync error codes organized by component, with trou
Publication uses publish_via_partition_root.
- **PSYNC_S1144**:
- Invalid Postgres server configuration for replication and bucket storage.
-
- The same Postgres server, running an unsupported version of Postgres, has been configured for both replication and bucket storage.
+ Invalid Postgres server configuration for replication and sync bucket storage.
+
+ The same Postgres server, running an unsupported version of Postgres, has been configured for both replication and sync bucket storage.
Using the same Postgres server is only supported on Postgres 14 and above.
This error typically indicates that the Postgres version is below 14.
- Either upgrade the Postgres server to version 14 or above, or use a different Postgres server for bucket storage.
+ Either upgrade the Postgres server to version 14 or above, or use a different Postgres server for sync bucket storage.
+
+- **PSYNC_S1145**:
+ Table has RLS enabled, but the replication role does not have the BYPASSRLS attribute.
+
+ We recommend using a dedicated replication role with the BYPASSRLS attribute for replication:
+
+ ```sql
+  ALTER ROLE powersync_role BYPASSRLS;
+ ```
+
+ An alternative is to create explicit policies for the replication role. If you have done that,
+ you may ignore this warning.
## PSYNC_S12xx: MySQL replication issues
@@ -204,14 +216,25 @@ This reference documents PowerSync error codes organized by component, with trou
- **PSYNC_S1346**:
Failed to read MongoDB Change Stream.
-
+
See the error cause for more details.
+- **PSYNC_S1347**:
+ Timeout while getting a resume token for an initial snapshot.
+
+ This may happen if there is very high load on the source database.
+
## PSYNC_S14xx: MongoDB storage replication issues
- **PSYNC_S1402**:
Max transaction tries exceeded.
+- **PSYNC_S1500**:
+ Required updates in the Change Data Capture (CDC) are no longer available.
+
+ Possible causes:
+ - Older data has been cleaned up due to exceeding the retention period.
+
## PSYNC_S2xxx: Service API
- **PSYNC_S2001**:
@@ -309,6 +332,16 @@ This does not include auth configuration errors on the service.
- **PSYNC_S2401**:
Could not get clusterTime.
+- **PSYNC_S2402**:
+ Failed to connect to the MongoDB storage database.
+
+- **PSYNC_S2403**:
+ Query timed out. Could be due to a large query or a temporary load issue on the storage database.
+ Retry the request.
+
+- **PSYNC_S2404**:
+ Query failure on the storage database. See error details for more information.
+
## PSYNC_S23xx: Sync API errors - Postgres Storage
## PSYNC_S3xxx: Service configuration issues
diff --git a/docs.json b/docs.json
index 470561fd..ce83a758 100644
--- a/docs.json
+++ b/docs.json
@@ -163,29 +163,36 @@
]
},
{
- "group": "Sync Rules & Streams",
+ "group": "Sync Streams & Rules",
"icon": "arrows-rotate",
"pages": [
"sync/overview",
{
- "group": "Sync Rules (GA)",
+ "group": "Sync Streams (Beta)",
+ "pages": [
+ "sync/streams/overview",
+ "sync/streams/parameters",
+ "sync/streams/queries",
+ "sync/streams/ctes",
+ "sync/streams/examples",
+ "sync/streams/client-usage",
+ "sync/streams/migration"
+ ]
+ },
+ {
+ "group": "Sync Rules (Legacy)",
"pages": [
"sync/rules/overview",
"sync/rules/organize-data-into-buckets",
"sync/rules/global-buckets",
"sync/rules/parameter-queries",
"sync/rules/data-queries",
- "sync/rules/supported-sql",
+ "sync/rules/many-to-many-join-tables",
"sync/rules/client-parameters"
]
},
- {
- "group": "Sync Streams (Early Alpha)",
- "pages": [
- "sync/streams/overview"
- ]
- },
"sync/types",
+ "sync/supported-sql",
{
"group": "Advanced",
"pages": [
@@ -194,7 +201,6 @@
"sync/advanced/client-id",
"sync/advanced/case-sensitivity",
"sync/advanced/compatibility",
- "sync/advanced/many-to-many-and-join-tables",
"sync/advanced/sync-data-by-time",
"sync/advanced/schemas-and-connections",
"sync/advanced/multiple-client-versions",
@@ -440,7 +446,7 @@
"pages": [
"integrations/supabase/guide",
"integrations/supabase/realtime-streaming",
- "integrations/supabase/rls-and-sync-rules",
+ "integrations/supabase/rls-and-sync-streams",
"integrations/supabase/local-development",
"integrations/supabase/connector-performance"
]
@@ -739,7 +745,11 @@
},
{
"source": "/usage/sync-rules/operators-and-functions",
- "destination": "/sync/rules/supported-sql"
+ "destination": "/sync/supported-sql"
+ },
+ {
+ "source": "/sync/rules/supported-sql",
+ "destination": "/sync/supported-sql"
},
{
"source": "/usage/sync-rules/advanced-topics",
@@ -763,7 +773,11 @@
},
{
"source": "/usage/sync-rules/guide-many-to-many-and-join-tables",
- "destination": "/sync/advanced/many-to-many-and-join-tables"
+ "destination": "/sync/rules/many-to-many-join-tables"
+ },
+ {
+ "source": "/sync/advanced/many-to-many-and-join-tables",
+ "destination": "/sync/rules/many-to-many-join-tables"
},
{
"source": "/usage/sync-rules/guide-sync-data-by-time",
@@ -1256,7 +1270,11 @@
},
{
"source": "/integration-guides/supabase-+-powersync/rls-and-sync-rules",
- "destination": "/integrations/supabase/rls-and-sync-rules"
+ "destination": "/integrations/supabase/rls-and-sync-streams"
+ },
+ {
+ "source": "/integrations/supabase/rls-and-sync-rules",
+ "destination": "/integrations/supabase/rls-and-sync-streams"
},
{
"source": "/integration-guides/supabase-+-powersync/local-development",
diff --git a/handling-writes/custom-conflict-resolution.mdx b/handling-writes/custom-conflict-resolution.mdx
index febc0c06..06dab49b 100644
--- a/handling-writes/custom-conflict-resolution.mdx
+++ b/handling-writes/custom-conflict-resolution.mdx
@@ -36,7 +36,7 @@ When data changes on the server:
1. **Source database updates** - Direct writes or changes from other clients
2. **PowerSync Service detects changes** - Through replication stream
-3. **Clients download updates** - Based on their sync rules
+3. **Clients download updates** - Based on their Sync Streams (or legacy Sync Rules)
4. **Local SQLite updates** - Changes merge into the client's database
**Conflicts arise when**: Multiple clients modify the same row (or fields) before syncing, or when a client's changes conflict with server-side rules.
@@ -507,17 +507,32 @@ CREATE TABLE write_conflicts (
### Step 2: Sync Conflicts to Clients
-**Sync Rules configuration:**
-
-```yaml
-bucket_definitions:
- user_data:
- parameters:
- - SELECT request.user_id() as user_id
- data:
- - SELECT * FROM tasks WHERE user_id = bucket.user_id
- - SELECT * FROM write_conflicts WHERE user_id = bucket.user_id AND resolved = FALSE
-```
+**Sync Streams / Sync Rules:**
+
+
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ user_data:
+ queries:
+ - SELECT * FROM tasks WHERE user_id = auth.user_id()
+ - SELECT * FROM write_conflicts WHERE user_id = auth.user_id() AND NOT resolved
+ ```
+
+
+ ```yaml
+ bucket_definitions:
+ user_data:
+ parameters:
+ - SELECT request.user_id() as user_id
+ data:
+ - SELECT * FROM tasks WHERE user_id = bucket.user_id
+ - SELECT * FROM write_conflicts WHERE user_id = bucket.user_id AND resolved = FALSE
+ ```
+
+
### Step 3: Record Conflicts in Backend
@@ -850,7 +865,7 @@ For scenarios where you just need to record changes without tracking their statu
How it works:
- Mark the table as `insertOnly: true` in your client schema
-- Don't include the `field_changes` table in your sync rules
+- Don't include the `field_changes` table in your Sync Streams (or legacy Sync Rules)
- Changes are uploaded to the server but never downloaded back to clients
**Client schema:**
@@ -880,7 +895,7 @@ For scenarios where you want to show sync status temporarily but don't need a pe
How it works:
- Use a normal table on the client (not `insertOnly`)
-- Don't include the `field_changes` table in your sync rules
+- Don't include the `field_changes` table in your Sync Streams (or legacy Sync Rules)
- Pending changes stay on the client until they're uploaded and the server processes them
- Once the server processes a change and PowerSync syncs the next checkpoint, the change automatically disappears from the client
@@ -919,7 +934,7 @@ function SyncIndicator({ taskId }: { taskId: string }) {
**When to use:** Showing "syncing..." indicators, temporary status tracking without long-term storage overhead, cases where you want automatic cleanup after sync.
-**Tradeoff:** Can't show detailed server-side error messages (unless the server writes to a separate errors table that *is* in sync rules). No long-term history on the client.
+**Tradeoff:** Can't show detailed server-side error messages (unless the server writes to a separate errors table that *is* included in your Sync Streams or Sync Rules). No long-term history on the client.
## Strategy 7: Cumulative Operations (Inventory)
diff --git a/handling-writes/custom-write-checkpoints.mdx b/handling-writes/custom-write-checkpoints.mdx
index a2a2d460..66ec7d64 100644
--- a/handling-writes/custom-write-checkpoints.mdx
+++ b/handling-writes/custom-write-checkpoints.mdx
@@ -89,7 +89,7 @@ create publication powersync for table public.lists, public.todos, public.checkp
### Sync Rules Requirements
-You need to enable the `write_checkpoints` sync event in your Sync Rules configuration. This event should map the rows from the `checkpoints` table to the `CheckpointPayload` payload.
+You need to enable the `write_checkpoints` sync event in your Sync Rules. This event should map the rows from the `checkpoints` table to the `CheckpointPayload` payload.
```YAML
# sync-rules.yaml
diff --git a/images/architecture/powersync-docs-diagram-client-architecture-001.png b/images/architecture/powersync-docs-diagram-client-architecture-001.png
old mode 100755
new mode 100644
index 8e1eaf80..7cad73b5
Binary files a/images/architecture/powersync-docs-diagram-client-architecture-001.png and b/images/architecture/powersync-docs-diagram-client-architecture-001.png differ
diff --git a/images/architecture/powersync-docs-diagram-client-architecture-002.png b/images/architecture/powersync-docs-diagram-client-architecture-002.png
old mode 100755
new mode 100644
index 698366da..ce77d377
Binary files a/images/architecture/powersync-docs-diagram-client-architecture-002.png and b/images/architecture/powersync-docs-diagram-client-architecture-002.png differ
diff --git a/images/architecture/powersync-docs-diagram-client-architecture-003.png b/images/architecture/powersync-docs-diagram-client-architecture-003.png
old mode 100755
new mode 100644
index 88531da6..118f7f88
Binary files a/images/architecture/powersync-docs-diagram-client-architecture-003.png and b/images/architecture/powersync-docs-diagram-client-architecture-003.png differ
diff --git a/images/integration-guides/neon/powersync-docs-diagram-neon-integration.png b/images/integration-guides/neon/powersync-docs-diagram-neon-integration.png
index b90d474d..82a0f7e3 100644
Binary files a/images/integration-guides/neon/powersync-docs-diagram-neon-integration.png and b/images/integration-guides/neon/powersync-docs-diagram-neon-integration.png differ
diff --git a/images/integration-guides/supabase/powersync-docs-diagram-supabase-integration.png b/images/integration-guides/supabase/powersync-docs-diagram-supabase-integration.png
old mode 100755
new mode 100644
index 299222d7..fbb73f89
Binary files a/images/integration-guides/supabase/powersync-docs-diagram-supabase-integration.png and b/images/integration-guides/supabase/powersync-docs-diagram-supabase-integration.png differ
diff --git a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-001.png b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-001.png
deleted file mode 100755
index 06ebcd23..00000000
Binary files a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-001.png and /dev/null differ
diff --git a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-002.png b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-002.png
deleted file mode 100755
index d287dbb1..00000000
Binary files a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-002.png and /dev/null differ
diff --git a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-003.png b/images/usage/sync-rules/powersync-docs-diagram-sync-rules-003.png
deleted file mode 100755
index 2bf322c2..00000000
Binary files a/images/usage/sync-rules/powersync-docs-diagram-sync-rules-003.png and /dev/null differ
diff --git a/images/usage/sync-rules/powersync-docs-diagram-sync-streams-002.png b/images/usage/sync-rules/powersync-docs-diagram-sync-streams-002.png
new file mode 100644
index 00000000..4afeb242
Binary files /dev/null and b/images/usage/sync-rules/powersync-docs-diagram-sync-streams-002.png differ
diff --git a/images/usage/sync-rules/powersync-docs-diagram-sync-streams-003.png b/images/usage/sync-rules/powersync-docs-diagram-sync-streams-003.png
new file mode 100644
index 00000000..e9a88429
Binary files /dev/null and b/images/usage/sync-rules/powersync-docs-diagram-sync-streams-003.png differ
diff --git a/images/usage/sync-rules/powersync-docs-diagram-sync-stremas-001.png b/images/usage/sync-rules/powersync-docs-diagram-sync-stremas-001.png
new file mode 100644
index 00000000..2b4ede01
Binary files /dev/null and b/images/usage/sync-rules/powersync-docs-diagram-sync-stremas-001.png differ
diff --git a/integrations/flutterflow/guide.mdx b/integrations/flutterflow/guide.mdx
index 0e2df767..02ec226e 100644
--- a/integrations/flutterflow/guide.mdx
+++ b/integrations/flutterflow/guide.mdx
@@ -102,7 +102,7 @@ This guide walks you through building a basic item management app from scratch a
- For additional information on PowerSync's Sync Rules, refer to the [Sync Rules](/sync/rules/overview) documentation.
- - If you're wondering how Sync Rules relate to Supabase Postgres [RLS](https://supabase.com/docs/guides/auth/row-level-security), see [this subsection](/integrations/supabase/rls-and-sync-rules).
+ - If you're wondering how Sync Rules relate to Supabase Postgres [RLS](https://supabase.com/docs/guides/auth/row-level-security), see [this subsection](/integrations/supabase/rls-and-sync-streams).
## Initialize Your FlutterFlow Project
diff --git a/integrations/flutterflow/legacy-guide.mdx b/integrations/flutterflow/legacy-guide.mdx
index 3a2fc2f8..413d6084 100644
--- a/integrations/flutterflow/legacy-guide.mdx
+++ b/integrations/flutterflow/legacy-guide.mdx
@@ -98,7 +98,7 @@ bucket_definitions:
For additional information on PowerSync's Sync Rules, refer to the [Sync Rules](/sync/rules/overview) documentation.
-If you're wondering how Sync Rules relate to Supabase Postgres [RLS](https://supabase.com/docs/guides/auth/row-level-security), see [this subsection](/integrations/supabase/rls-and-sync-rules).
+If you're wondering how Sync Rules relate to Supabase Postgres [RLS](https://supabase.com/docs/guides/auth/row-level-security), see [this subsection](/integrations/supabase/rls-and-sync-streams).
## Initialize Your FlutterFlow Project
diff --git a/integrations/neon.mdx b/integrations/neon.mdx
index 1a46d4a5..1c930979 100644
--- a/integrations/neon.mdx
+++ b/integrations/neon.mdx
@@ -34,7 +34,7 @@ Upon successful integration of Neon + PowerSync, your system architecture will l
-The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the Neon Postgres database (based on configured Sync Rules as you will see later in this guide). Client-side data modifications are persisted in the local SQLite database as well as stored in an upload queue that gets processed via the Neon Data API when network connectivity is available. Therefore reads and writes can happen in the app regardless of whether the user is online or offline, by using the local SQLite database.
+The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the Neon Postgres database (based on your Sync Streams as you will see later in this guide). Client-side data modifications are persisted in the local SQLite database as well as stored in an upload queue that gets processed via the Neon Data API when network connectivity is available. Therefore, reads and writes can happen in the app regardless of whether the user is online or offline, using the local SQLite database.
For more details on PowerSync's general architecture, [see here](/architecture/architecture-overview).
@@ -53,7 +53,7 @@ We will follow these steps to get an offline-first 'Notes' demo app up and runni
* Create connection to Neon
* Configure authentication
- * Configure Sync Rules
+ * Configure Sync Streams
Test the configuration using our provided PowerSync-Neon 'Notes' demo app.
@@ -135,38 +135,67 @@ PowerSync uses logical replication to sync data from your Neon database.
### Connect PowerSync to Your Neon Database
-### Configure Sync Rules
-
-[Sync Rules](/sync/rules/overview) allow developers to control which data gets synced to which user devices using a SQL-like syntax in a YAML file. For the demo app, we're going to specify that each user can only see their own notes (plus any shared notes).
-
-1. In the PowerSync Dashboard, select your project and instance and go to the **Sync Rules** view.
-
-2. Edit the Sync Rules in the editor and replace the contents with the below:
-
-```yaml
-config:
- edition: 2
-
-bucket_definitions:
- by_user:
- # Only sync rows belonging to the user
- parameters: SELECT id as note_id FROM notes WHERE owner_id = request.user_id()
- data:
- - SELECT * FROM notes WHERE id = bucket.note_id
- - SELECT * FROM paragraphs WHERE note_id = bucket.note_id
- # Sync all shared notes to all users (not recommended for production)
- shared_notes:
- parameters: SELECT id as note_id from notes where shared = TRUE
- data:
- - SELECT * FROM notes WHERE id = bucket.note_id
- - SELECT * FROM paragraphs WHERE note_id = bucket.note_id
-```
-
-3. Click **"Validate"** and ensure there are no errors. This validates your Sync Rules against your Postgres database.
-4. Click **"Deploy"** to deploy your Sync Rules.
+### Configure Sync Streams
+
+[Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) allow developers to control which data gets synced to which user devices using a SQL-like syntax in a YAML file. For the demo app, we're going to specify that each user can only see their own notes (plus any shared notes).
+
+1. In the PowerSync Dashboard, select your project and instance and go to the **Sync Streams** view (shown as **Sync Rules** if using legacy Sync Rules).
+
+2. Edit the sync config in the editor and replace the contents with the below:
+
+
+
+ ```yaml
+ config:
+ edition: 3
+
+ streams:
+ user_notes:
+ auto_subscribe: true
+ # Sync notes and paragraphs belonging to the authenticated user
+ queries:
+ - SELECT * FROM notes WHERE owner_id = auth.user_id()
+ - SELECT paragraphs.* FROM paragraphs
+ INNER JOIN notes ON notes.id = paragraphs.note_id
+ WHERE notes.owner_id = auth.user_id()
+ shared_notes:
+ auto_subscribe: true
+ # Sync all shared notes to all users (not recommended for production)
+ queries:
+ - SELECT * FROM notes WHERE shared = TRUE
+ - SELECT paragraphs.* FROM paragraphs
+ INNER JOIN notes ON notes.id = paragraphs.note_id
+ WHERE notes.shared = TRUE
+ ```
+
+
+ ```yaml
+ config:
+ edition: 2
+
+ bucket_definitions:
+ by_user:
+ # Only sync rows belonging to the user
+ parameters: SELECT id as note_id FROM notes WHERE owner_id = request.user_id()
+ data:
+ - SELECT * FROM notes WHERE id = bucket.note_id
+ - SELECT * FROM paragraphs WHERE note_id = bucket.note_id
+ # Sync all shared notes to all users (not recommended for production)
+ shared_notes:
+ parameters: SELECT id as note_id from notes where shared = TRUE
+ data:
+ - SELECT * FROM notes WHERE id = bucket.note_id
+ - SELECT * FROM paragraphs WHERE note_id = bucket.note_id
+ ```
+
+
+
+3. Click **"Validate"** and ensure there are no errors. This validates your sync config against your Postgres database.
+4. Click **"Deploy"** to deploy your sync config.
-For additional information on PowerSync's Sync Rules, refer to the [Sync Rules](/sync/rules/overview) documentation.
+- For additional information on PowerSync's Sync Streams, refer to the [Sync Streams](/sync/streams/overview) documentation.
+- For legacy Sync Rules, refer to the [Sync Rules](/sync/rules/overview) documentation.
## Test Everything (Using Our Demo App)
@@ -214,7 +243,7 @@ During development, you can use the **Sync Test** feature in the PowerSync Dashb
1. Click on **"Sync Test"** in the PowerSync Dashboard.
2. Enter the UUID of a user in your Neon Auth database to generate a test JWT.
-3. Click **"Launch Sync Diagnostics Client"** to test the sync rules.
+3. Click **"Launch Sync Diagnostics Client"** to test your Sync Streams (or legacy Sync Rules).
For more information, explore the [PowerSync docs](/) or join us on [our community Discord](https://discord.gg/powersync) where our team is always available to answer questions.
diff --git a/integrations/serverpod.mdx b/integrations/serverpod.mdx
index 0bf1d41b..b711c44a 100644
--- a/integrations/serverpod.mdx
+++ b/integrations/serverpod.mdx
@@ -234,15 +234,15 @@ storage:
# The port which the PowerSync API server will listen on
port: 8080
-sync_rules:
+sync_config:
content: |
+ config:
+ edition: 3
streams:
todos:
# For each user, sync all greeting they own.
- query: SELECT * FROM greeting WHERE owner = request.user_id()
auto_subscribe: true # Sync by default
- config:
- edition: 2
+ query: SELECT * FROM greeting WHERE owner = auth.user_id()
client_auth:
audience: [powersync]
diff --git a/integrations/supabase/guide.mdx b/integrations/supabase/guide.mdx
index 869572ae..e37f20fa 100644
--- a/integrations/supabase/guide.mdx
+++ b/integrations/supabase/guide.mdx
@@ -37,7 +37,7 @@ Upon successful integration of Supabase + PowerSync, your system architecture wi
-The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the Supabase Postgres database (based on configured Sync Rules as you will see later in this guide). Client-side data modifications are persisted in the local SQLite database as well as stored in an upload queue that gets processed via the Supabase client library when network connectivity is available. Therefore reads and writes can happen in the app regardless of whether the user is online or offline, by using the local SQLite database.
+The local SQLite database embedded in the PowerSync SDK is automatically kept in sync with the Supabase Postgres database (based on your Sync Streams as you will see later in this guide). Client-side data modifications are persisted in the local SQLite database as well as stored in an upload queue that gets processed via the Supabase client library when network connectivity is available. Therefore, reads and writes can happen in the app regardless of whether the user is online or offline, using the local SQLite database.
For more details on PowerSync's general architecture, [see here](/architecture/architecture-overview).
@@ -54,7 +54,7 @@ We will follow these steps to get an offline-first 'To-Do List' demo app up and
* Create connection to Supabase
- * Configure Sync Rules
+ * Configure Sync Streams
Test the configuration using our provided PowerSync-Supabase 'To-Do List' demo app with your framework of choice.
@@ -122,30 +122,48 @@ Run the below SQL statement in your **Supabase SQL Editor** to create a Postgres
### Connect PowerSync to Your Supabase
-### Configure Sync Rules
-
-[Sync Rules](/sync/rules/overview) allow developers to control which data gets synced to which user devices using a SQL-like syntax in a YAML file. For the demo app, we're going to specify that each user can only see their own to-do lists and list items.
-
-1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Sync Rules** view.
-
-2. Edit the Sync Rules in the editor and replace the contents with the below:
-
-```yaml
-bucket_definitions:
- user_lists:
- # Separate bucket per To-Do list
- parameters: select id as list_id from lists where owner_id = request.user_id()
- data:
- - select * from lists where id = bucket.list_id
- - select * from todos where list_id = bucket.list_id
-```
-
-2. Click **"Validate"** and ensure there are no errors. This validates your Sync Rules against your Postgres database.
-3. Click **"Deploy"** to deploy your Sync Rules.
+### Configure Sync Streams
+
+[Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) allow developers to control which data gets synced to which user devices using a SQL-like syntax in a YAML file. For the demo app, we're going to specify that each user can only see their own to-do lists and list items.
+
+1. In the [PowerSync Dashboard](https://dashboard.powersync.com/), select your project and instance and go to the **Sync Streams** view (shown as **Sync Rules** if using legacy Sync Rules).
+
+2. Edit the sync config in the editor and replace the contents with the below:
+
+
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ user_data:
+ auto_subscribe: true
+ queries:
+ - SELECT * FROM lists WHERE owner_id = auth.user_id()
+ - SELECT todos.* FROM todos INNER JOIN lists ON todos.list_id = lists.id WHERE lists.owner_id = auth.user_id()
+
+ ```
+
+
+ ```yaml
+ bucket_definitions:
+ user_lists:
+ # Separate bucket per To-Do list
+ parameters: select id as list_id from lists where owner_id = request.user_id()
+ data:
+ - select * from lists where id = bucket.list_id
+ - select * from todos where list_id = bucket.list_id
+ ```
+
+
+
+3. Click **"Validate"** and ensure there are no errors. This validates your sync config against your Postgres database.
+4. Click **"Deploy"** to deploy your sync config.
-- For additional information on PowerSync's Sync Rules, refer to the [Sync Rules](/sync/rules/overview) documentation.
-- If you're wondering how Sync Rules relate to Supabase Postgres [RLS](https://supabase.com/docs/guides/auth/row-level-security), see [this subsection](/integrations/supabase/rls-and-sync-rules).
+- For additional information on PowerSync's Sync Streams, refer to the [Sync Streams](/sync/streams/overview) documentation.
+- For legacy Sync Rules, refer to the [Sync Rules](/sync/rules/overview) documentation.
+- If you're wondering how Sync Streams relate to Supabase Postgres [RLS](https://supabase.com/docs/guides/auth/row-level-security), see [this subsection](/integrations/supabase/rls-and-sync-streams).
## Test Everything (Using Our Demo App)
diff --git a/integrations/supabase/rls-and-sync-rules.mdx b/integrations/supabase/rls-and-sync-streams.mdx
similarity index 51%
rename from integrations/supabase/rls-and-sync-rules.mdx
rename to integrations/supabase/rls-and-sync-streams.mdx
index 4514f9ce..138b7eee 100644
--- a/integrations/supabase/rls-and-sync-rules.mdx
+++ b/integrations/supabase/rls-and-sync-streams.mdx
@@ -1,12 +1,12 @@
---
-title: "RLS and Sync Rules"
+title: "RLS and Sync Streams"
---
-PowerSync's [Sync Rules](/sync/rules/overview) and Supabase's support for [Row Level Security (RLS)](https://supabase.com/docs/guides/auth/row-level-security) can be used in conjunction. Here are some high level similarities and differences:
+PowerSync's [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) and Supabase's support for [Row Level Security (RLS)](https://supabase.com/docs/guides/auth/row-level-security) can be used in conjunction. Here are some high level similarities and differences:
* RLS should be used as the authoritative set of security rules applied to your users' CRUD operations that reach Postgres.
-* Sync Rules are only applied for data that is to be downloaded to clients — they do not apply to uploaded data.
- * Sync Rules can typically be considered to be complementary to RLS, and will generally mirror your RLS setup.
+* Sync Streams (or legacy Sync Rules) are only applied for data that is to be downloaded to clients — they do not apply to uploaded data.
+ * Sync Streams / Sync Rules can typically be considered to be complementary to RLS, and will generally mirror your RLS setup.
Supabase tables are often created with auto-increment IDs. For easiest use of PowerSync, make sure to convert them to text IDs as detailed [**here**](/sync/advanced/client-id)**.**
@@ -35,8 +35,10 @@ create policy "todos in owned lists" on public.todos for ALL using (
```
-`auth.uid()` in a Supabase RLS policy is the same as `request.user_id()` (previously `token_parameters.user_id`) in [Sync Rules](/sync/rules/overview).
+`auth.uid()` in a Supabase RLS policy maps to:
+- `auth.user_id()` in [Sync Streams](/sync/streams/overview)
+- `request.user_id()` (previously `token_parameters.user_id`) in legacy [Sync Rules](/sync/rules/overview)
-If you compare these to your Sync Rules configuration in `sync-rules.yaml`, you'll see they are quite similar.
+If you compare these to your sync config, you'll see the access patterns are quite similar.
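+
+To make the parallel concrete, here is a minimal Sync Stream sketch mirroring a typical ownership-based RLS filter (a sketch only, assuming a `lists` table with an `owner_id` column as used in the integration guide):
+
+```yaml
+config:
+  edition: 3
+streams:
+  user_lists:
+    auto_subscribe: true
+    queries:
+      # Mirrors the RLS filter `owner_id = auth.uid()`
+      - SELECT * FROM lists WHERE owner_id = auth.user_id()
+```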
If you have any questions, join us on [our community Discord](https://discord.gg/powersync) where our team is always available to help.
diff --git a/intro/examples.mdx b/intro/examples.mdx
index a98a1000..a2b9dd28 100644
--- a/intro/examples.mdx
+++ b/intro/examples.mdx
@@ -1,6 +1,7 @@
---
title: "Demo Apps & Example Projects"
sidebarTitle: "Examples"
+description: Explore demo apps and example projects to see PowerSync in action across different platforms and backends.
---
The best way to understand how PowerSync works is to explore it hands-on. Browse our collection of demo apps and example projects to see PowerSync in action, experiment with different features, or use as a reference for your own app.
diff --git a/intro/powersync-philosophy.mdx b/intro/powersync-philosophy.mdx
index 88b08e13..045b1560 100644
--- a/intro/powersync-philosophy.mdx
+++ b/intro/powersync-philosophy.mdx
@@ -26,7 +26,7 @@ Once you have a local SQLite database that is always in sync, [state management]
#### Flexibility
-PowerSync allows you to fully customize what data is synced to the client. Syncing the entire database is extremely simple, but it is just as easy to use our [Sync Rules](/sync/rules/overview) to transform and filter data for each client (dynamic partial replication).
+PowerSync allows you to fully customize what data is synced to the client. Syncing the entire database is extremely simple, but it is just as easy to use [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) to transform and filter data for each client (partial sync).
Writing back to the backend source database [is in full control of the developer](/handling-writes/writing-client-changes) — use your own authentication, validation, and constraints.
@@ -38,9 +38,9 @@ Our goal is also to be stack-agnostic: whether you are switching from MySQL to P
#### Simplicity
-You use plain Postgres, MongoDB, MySQL, or SQL Server on the server — no extensions, and no significant change in your schema required \[[2](#footnotes)\]. PowerSync [uses](/configuration/source-db/setup) Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture (CDC) to replicate changes to the [PowerSync Service](/architecture/powersync-service), where data is transformed and partitioned according to [Sync Rules](/sync/rules/overview), and persisted in a way that allows efficiently streaming incremental changes to each client.
+You use plain Postgres, MongoDB, MySQL, or SQL Server on the server — no extensions, and no significant change in your schema required \[[2](#footnotes)\]. PowerSync [uses](/configuration/source-db/setup) Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture (CDC) to replicate changes to the [PowerSync Service](/architecture/powersync-service), where data is transformed and partitioned according to [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)), and persisted in a way that allows efficiently streaming incremental changes to each client.
-PowerSync has been used in apps with hundreds of tables. There are no complex migrations to run: You define your [Sync Rules](/sync/rules/overview) and [client-side schema](/intro/setup-guide#define-your-client-side-schema), and the data is automatically kept in sync. If you [change Sync Rules](/maintenance-ops/implementing-schema-changes), the entire new set of data is applied atomically on the client. When you do need to make schema changes on the server while still supporting older clients, we [have the processes in place](/maintenance-ops/implementing-schema-changes) to do that without hassle.
+PowerSync has been used in apps with hundreds of tables. There are no complex migrations to run: You define your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) and [client-side schema](/intro/setup-guide#define-your-client-side-schema), and the data is automatically kept in sync. If you [change Sync Streams/Rules](/maintenance-ops/implementing-schema-changes), the relevant new set of data is applied atomically on the client. When you do need to make schema changes on the server while still supporting older clients, we [have the processes in place](/maintenance-ops/implementing-schema-changes) to do that without hassle.
No need for CRDTs \[[3](#footnotes)\]. PowerSync is a server-client sync platform: since no peer-to-peer syncing is involved, CRDTs can be overkill. Instead, we use a server reconciliation architecture with a default approach of "last write wins", with capability to [customize the conflict resolution if required](/handling-writes/handling-update-conflicts) — the developer is in [full control of the write process](/handling-writes/writing-client-changes). Our [strong consistency guarantees](/architecture/consistency) give you peace of mind about the integrity of data on the client.
diff --git a/intro/setup-guide.mdx b/intro/setup-guide.mdx
index 2d380bd8..d9e821b1 100644
--- a/intro/setup-guide.mdx
+++ b/intro/setup-guide.mdx
@@ -187,7 +187,7 @@ PowerSync is available as a cloud-hosted service (PowerSync Cloud) or can be sel
Self-hosted PowerSync runs via Docker.
- Below is a minimal example of setting up the PowerSync Service with Postgres as the bucket storage database and example Sync Rules. MongoDB is also supported as a bucket storage database (docs are linked at the end of this step), and you will learn more about Sync Rules in a next step.
+ Below is a minimal example of setting up the PowerSync Service with Postgres as the [bucket storage](/architecture/powersync-service#bucket-storage) database and example Sync Streams. MongoDB is also supported as a bucket storage database (docs are linked at the end of this step), and you will learn more about Sync Streams in a later step.
```bash
# 1. Create a directory for your config
@@ -230,25 +230,28 @@ PowerSync is available as a cloud-hosted service (PowerSync Cloud) or can be sel
uri: postgresql://powersync_role:myhighlyrandompassword@powersync-postgres:5432/postgres
sslmode: disable # Only for local/private networks
- # Connection settings for bucket storage (Postgres and MongoDB are supported)
+ # Bucket storage connection (Postgres and MongoDB are supported)
storage:
type: postgresql
uri: postgresql://powersync_storage_user:my_secure_user_password@powersync-postgres-storage:5432/powersync_storage
sslmode: disable # Use 'disable' only for local/private networks
- # Sync Rules (defined in a later step)
- sync_rules:
+ # Sync Streams (explained in a later step)
+ sync_config:
content: |
- bucket_definitions:
- global:
- data:
+ config:
+ edition: 3
+ streams:
+ shared_data:
+ auto_subscribe: true
+ queries:
- SELECT * FROM lists
- SELECT * FROM todos
```
**Note**: This example assumes you've configured your source database with the required user and publication (see the previous step)
- and are running it via Docker in the 'powersync-network' network.
+ and are running it via Docker in the `powersync-network` network.
If you are not using Docker, you will need to specify the connection details in the `config.yaml` file manually (see next step for more details).
@@ -300,41 +303,41 @@ The next step is to connect your PowerSync Service instance to your source datab
- ```yaml Postgres
- replication:
- connections:
- - type: postgresql # or mongodb, mysql, mssql
- uri: postgresql://powersync_role:myhighlyrandompassword@powersync-postgres:5432/postgres # The connection URI or individual parameters can be specified.
- sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
- # Note: 'disable' is only suitable for local/private networks, not for public networks
+ ```yaml Postgres
+ replication:
+ connections:
+ - type: postgresql # or mongodb, mysql, mssql
+ uri: postgresql://powersync_role:myhighlyrandompassword@powersync-postgres:5432/postgres # The connection URI or individual parameters can be specified.
+ sslmode: disable # 'verify-full' (default) or 'verify-ca' or 'disable'
+ # Note: 'disable' is only suitable for local/private networks, not for public networks
+ ```
+
+ ```yaml MongoDB
+ replication:
+ connections:
+ - type: mongodb
+ uri: mongodb+srv://user:password@cluster.mongodb.net/database
+ post_images: auto_configure
```
- ```yaml MongoDB
- replication:
- connections:
- - type: mongodb
- uri: mongodb+srv://user:password@cluster.mongodb.net/database
- post_images: auto_configure
- ```
-
- ```yaml MySQL
- replication:
- connections:
- - type: mysql
- uri: mysql://repl_user:password@host:3306/database
- ```
-
- ```yaml SQL Server
- replication:
- connections:
- - type: mssql
- uri: mssql://user:password@$host:1433/database
- schema: dbo
- additionalConfig:
- trustServerCertificate: true
- pollingIntervalMs: 1000
- pollingBatchSize: 20
- ```
+ ```yaml MySQL
+ replication:
+ connections:
+ - type: mysql
+ uri: mysql://repl_user:password@host:3306/database
+ ```
+
+ ```yaml SQL Server
+ replication:
+ connections:
+ - type: mssql
+ uri: mssql://user:password@host:1433/database
+ schema: dbo
+ additionalConfig:
+ trustServerCertificate: true
+ pollingIntervalMs: 1000
+ pollingBatchSize: 20
+ ```
@@ -346,94 +349,148 @@ The next step is to connect your PowerSync Service instance to your source datab
-# 4. Define Basic Sync Rules
+# 4. Define Sync Streams
+
+PowerSync uses **Sync Streams** (or legacy **Sync Rules**) to control which data gets synced to which users/devices. Both use SQL-like queries defined in YAML format.
+
+
+
-Sync Rules control which data gets synced to which users/devices. They consist of SQL-like queries organized into "buckets" (groupings of data). Each PowerSync Service instance has a Sync Rules definition in YAML format.
+Sync Streams are in beta and considered production-ready. We recommend Sync Streams for new projects — they offer a simpler syntax and support on-demand syncing for web apps.
-We recommend starting with a simple **global bucket** that syncs data to all users. This is the simplest way to get started.
+Start with simple **auto-subscribed streams** that sync data to all users by default:
- ```yaml Postgres Example
- bucket_definitions:
- global:
- data:
- - SELECT * FROM todos
- - SELECT * FROM lists WHERE archived = false
- ```
+```yaml Postgres Example
+config:
+ edition: 3
+streams:
+ shared_data:
+ auto_subscribe: true
+ queries:
+ - SELECT * FROM todos
+ - SELECT * FROM lists WHERE NOT archived
+```
+
+```yaml MongoDB Example
+config:
+ edition: 3
+streams:
+ shared_data:
+ auto_subscribe: true
+ # MongoDB uses "_id" but PowerSync uses "id" on the client
+ queries:
+ - SELECT _id as id, * FROM lists
+ - SELECT _id as id, * FROM todos WHERE archived = false
+```
+
+```yaml MySQL Example
+config:
+ edition: 3
+streams:
+ shared_data:
+ auto_subscribe: true
+ queries:
+ - SELECT * FROM todos
+ - SELECT * FROM lists WHERE NOT archived
+```
+
+```yaml SQL Server Example
+config:
+ edition: 3
+streams:
+ shared_data:
+ auto_subscribe: true
+ queries:
+ - SELECT * FROM todos
+ - SELECT * FROM lists WHERE NOT archived
+```
+
- ```yaml MongoDB Example
- bucket_definitions:
- global:
- data:
- # Note that MongoDB uses “_id” as the name of the ID field in collections whereas
- # PowerSync uses “id” in its client-side database. This is why the below syntax
- # should always be used in the data queries when pairing PowerSync with MongoDB.
- - SELECT _id as id, * FROM lists
- - SELECT _id as id, * FROM todos WHERE archived = false
- ```
+**Learn more:** [Sync Streams documentation](/sync/streams/overview)
- ```yaml MySQL Example
- bucket_definitions:
- global:
- data:
- - SELECT * FROM todos
- - SELECT * FROM lists WHERE archived = 0
- ```
+
- ```yaml SQL Server Example
- bucket_definitions:
- global:
- data:
- - SELECT * FROM todos
- - SELECT * FROM lists WHERE archived = 0
- ```
-
+
+Sync Rules are the original system for controlling data sync. Use them if you prefer a fully released (non-beta) solution.
-### Deploy Sync Rules
+
+```yaml Postgres Example
+bucket_definitions:
+ global:
+ data:
+ - SELECT * FROM todos
+ - SELECT * FROM lists WHERE archived = false
+```
+
+```yaml MongoDB Example
+bucket_definitions:
+ global:
+ data:
+ # MongoDB uses "_id" but PowerSync uses "id" on the client
+ - SELECT _id as id, * FROM lists
+ - SELECT _id as id, * FROM todos WHERE archived = false
+```
+
+```yaml MySQL Example
+bucket_definitions:
+ global:
+ data:
+ - SELECT * FROM todos
+ - SELECT * FROM lists WHERE archived = 0
+```
+
+```yaml SQL Server Example
+bucket_definitions:
+ global:
+ data:
+ - SELECT * FROM todos
+ - SELECT * FROM lists WHERE archived = 0
+```
+
-
-
- In the [PowerSync Dashboard](https://dashboard.powersync.com/):
+**Learn more:** [Sync Rules documentation](/sync/rules/overview)
- 1. Select your project and instance
- 2. Go to the **Sync Rules** view
- 3. Edit the YAML directly in the dashboard
- 4. Click **Deploy** to validate and deploy your Sync Rules
-
+
+
-
- Add to your `config.yaml`:
+### Deploy Your Configuration
- ```yaml
- sync_rules:
- content: |
- bucket_definitions:
- global:
- data:
- - SELECT * FROM todos
- - SELECT * FROM lists WHERE archived = false
- ```
-
-
+
+
+In the [PowerSync Dashboard](https://dashboard.powersync.com/):
+
+1. Select your project and instance
+2. Go to the **Sync Streams** or **Sync Rules** view (depending on which you're using)
+3. Edit the YAML directly in the dashboard
+4. Click **Deploy** to validate and deploy
+
+
+
+Add a `sync_config` section to your `config.yaml`. Using a separate file (recommended) keeps the main config tidy:
+
+**Recommended — reference a separate file:**
+```yaml config.yaml
+sync_config:
+ path: sync-config.yaml
+```
+
+Put your streams or rules in `sync-config.yaml` (see [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances#sync-streams--sync-rules) for full examples). Alternatively, you can use inline `content: |` with the YAML nested under `sync_config`.
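+
+For example, a minimal `sync-config.yaml` (a sketch mirroring the example streams shown earlier in this guide) could look like:
+
+```yaml sync-config.yaml
+config:
+  edition: 3
+streams:
+  shared_data:
+    auto_subscribe: true
+    queries:
+      - SELECT * FROM todos
+      - SELECT * FROM lists WHERE NOT archived
+```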
+
+
- **Note**: Table/collection names within your Sync Rules must match the table names defined in your client-side schema (defined in a later step below).
+Table/collection names in your configuration must match the table names defined in your client-side schema (covered in a later step).
-
- **Learn More**
-
- For more details on Sync Rules usage, see the [Sync Rules documentation](/sync/rules/overview).
-
-
# 5. Generate a Development Token
For quick development and testing, you can generate a temporary development token instead of implementing full authentication.
You'll use this token for two purposes:
-- **Testing with the _Sync Diagnostics Client_** (in the next step) to verify your setup and Sync Rules
+- **Testing with the _Sync Diagnostics Client_** (in the next step) to verify your setup and Sync Streams (or legacy Sync Rules)
- **Connecting your app** (in a later step) to test the client SDK integration
@@ -442,7 +499,7 @@ You'll use this token for two purposes:
2. Go to the **Client Auth** view
3. Check the **Development tokens** setting and save your changes
4. Click the **Connect** button in the top bar
- 5. **Enter token subject**: Since you're starting with just a simple global bucket in your Sync Rules that syncs all data to all users (as we recommended in the previous step), you can just put something like `test-user` as the token subject (which would normally be the user ID you want to test with).
+ 5. **Enter token subject**: Since you're starting with simple streams or buckets that sync all data to all users (as we recommended in the previous step), you can just put something like `test-user` as the token subject (which would normally be the user ID you want to test with).
6. Click **Generate token** and copy the token
@@ -471,16 +528,16 @@ Use the development token you generated in the [previous step](#5-generate-a-dev
1. Go to [https://diagnostics-app.powersync.com](https://diagnostics-app.powersync.com)
- 2. Enter your development token (from the [Generate a Development Token](#5-generate-a-development-token) step above)
- 3. Enter your PowerSync instance URL (found in [PowerSync Dashboard](https://dashboard.powersync.com/) - click **Connect** in the top bar)
- 4. Click **Connect**
+ 2. Enter your development token in the **PowerSync Token** field (from the [Generate a Development Token](#5-generate-a-development-token) step above)
+ 3. Enter your PowerSync instance URL in the **PowerSync Endpoint** field (found in the [PowerSync Dashboard](https://dashboard.powersync.com/) - click **Connect** in the top bar)
+ 4. Click **Proceed**
1. Go to [https://diagnostics-app.powersync.com](https://diagnostics-app.powersync.com)
- 2. Enter your development token (from the [Generate a Development Token](#5-generate-a-development-token) step above)
- 3. Enter your PowerSync Service endpoint (the URL where your self-hosted service is running, e.g. `http://localhost:8080` if running locally)
- 4. Click **Connect**
+ 2. Enter your development token in the **PowerSync Token** field (from the [Generate a Development Token](#5-generate-a-development-token) step above)
+ 3. Enter your PowerSync Service endpoint in the **PowerSync Endpoint** field (the URL where your self-hosted service is running, e.g. `http://localhost:8080` if running locally)
+ 4. Click **Proceed**
The Sync Diagnostics Client can also be run as a local standalone web app — see the [README](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app#readme) for instructions.
@@ -494,12 +551,12 @@ The Sync Diagnostics Client will connect to your PowerSync Service instance and
**Checkpoint:**
- Inspect your global bucket and synced tables in the Sync Diagnostics Client — these should match the Sync Rules you [defined previously](#4-define-basic-sync-rules). This confirms your setup is working correctly before integrating the client SDK into your app.
+ Inspect your synced tables in the Sync Diagnostics Client — these should match the Sync Streams (or legacy Sync Rules) you [defined previously](#4-define-sync-streams-or-sync-rules). This confirms your setup is working correctly before integrating the client SDK into your app.
# 7. Use the Client SDK
-Now it's time to integrate PowerSync into your app. This involves installing the SDK, defining your client-side schema, instantiating the database, connecting to your PowerSync Service instance, and reading/writing data.
+Now it's time to integrate PowerSync into your app. This involves installing the Client SDK, defining your client-side schema, instantiating the database, connecting to your PowerSync Service instance, and reading/writing data.
### Install the Client SDK
@@ -549,7 +606,7 @@ import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx';
-_PowerSync Cloud:_ The easiest way to generate your schema is using the [PowerSync Dashboard](https://dashboard.powersync.com/). Click the **Connect** button in the top bar to generate the client-side schema based on your Sync Rules in your preferred language.
+_PowerSync Cloud:_ The easiest way to generate your schema is using the [PowerSync Dashboard](https://dashboard.powersync.com/). Click the **Connect** button in the top bar to generate the client-side schema based on your Sync Streams (or legacy Sync Rules) in your preferred language.
Here's an example schema for a simple `todos` table:
@@ -564,12 +621,12 @@ import SdkSchemaExamples from '/snippets/sdk-schema-examples.mdx';
**Learn More**
- The client-side schema uses three column types: `text`, `integer`, and `real`. These map directly to values from your Sync Rules and are automatically cast if needed. For details on how backend database types map to SQLite types, see [Types](/sync/types).
+ The client-side schema uses three column types: `text`, `integer`, and `real`. These map directly to values from your Sync Streams (or legacy Sync Rules) and are automatically cast if needed. For details on how backend database types map to SQLite types, see [Types](/sync/types).
### Instantiate the PowerSync Database
-Now that you have your client-side schema defined, instantiate the PowerSync database in your app. This creates the client-side SQLite database that will be kept in sync with your source database based on your Sync Rules configuration.
+Now that you have your client-side schema defined, instantiate the PowerSync database in your app. This creates the client-side SQLite database that will be kept in sync with your source database based on your Sync Streams (or legacy Sync Rules).
import SdkInstantiateDbExamples from '/snippets/sdk-instantiate-db-examples.mdx';
@@ -1180,10 +1237,10 @@ For production deployments, you'll need to:
### Additional Resources
-- Learn more about [Sync Rules](/sync/rules/overview) for advanced data filtering
-- Explore [Live Queries / Watch Queries](/client-sdks/watch-queries) for reactive UI updates
-- Check out [Example Projects](/intro/examples) for complete implementations
-- Review the [Client SDK References](/client-sdks/overview) for client-side platform-specific details
+- Learn more about [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) for controlling partial syncing.
+- Explore [Live Queries / Watch Queries](/client-sdks/watch-queries) for reactive UI updates.
+- Check out [Example Projects](/intro/examples) for complete implementations.
+- Review the [Client SDK References](/client-sdks/overview) for client-side platform-specific details.
# Questions?
diff --git a/maintenance-ops/compacting-buckets.mdx b/maintenance-ops/compacting-buckets.mdx
index 348bfab4..620d0305 100644
--- a/maintenance-ops/compacting-buckets.mdx
+++ b/maintenance-ops/compacting-buckets.mdx
@@ -174,11 +174,11 @@ Key considerations:
2. **Scope**: Defragmenting all rows at once is more efficient but causes a larger sync cycle
3. **Monitoring**: Use the [Sync Diagnostics Client](https://github.com/powersync-ja/powersync-js/tree/main/tools/diagnostics-app) to track operations-to-rows ratio
-## Sync Rule deployments
+## Sync Streams Deployments
-Whenever modifications to [Sync Rules](/sync/rules/overview) are deployed, all buckets are re-created from scratch. This has a similar effect to fully defragmenting and compacting all buckets. This was recommended as a workaround before explicit compacting became available ([released July 26, 2024](https://releases.powersync.com/announcements/bucket-compacting)).
+Whenever modifications to [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) are deployed, all buckets are re-created from scratch. This has a similar effect to fully defragmenting and compacting all buckets. This was recommended as a workaround before explicit compacting became available ([released July 26, 2024](https://releases.powersync.com/announcements/bucket-compacting)).
-In the future, we may use [incremental sync rule reprocessing](https://roadmap.powersync.com/c/85-more-efficient-sync-reprocessing) to process changed bucket definitions only.
+Soon, we will use [incremental sync rule reprocessing](https://github.com/orgs/powersync-ja/discussions/349) to process changed definitions only.
## Technical details
diff --git a/maintenance-ops/deploying-schema-changes.mdx b/maintenance-ops/deploying-schema-changes.mdx
index 268d8038..76b03bf2 100644
--- a/maintenance-ops/deploying-schema-changes.mdx
+++ b/maintenance-ops/deploying-schema-changes.mdx
@@ -3,7 +3,7 @@ title: "Deploying Schema Changes"
sidebarTitle: "Deploying Schema Changes"
---
-The deploy process for schema or [Sync Rule](/sync/rules/overview) updates depends on the type of change.
+The deploy process for schema or [Sync Streams](/sync/streams/overview) / [Sync Rules](/sync/rules/overview) updates depends on the type of change.
See the appropriate subsections below for details on the various scenarios.
@@ -13,8 +13,8 @@ See the appropriate subsections below for details on the various scenarios.
Example: Add a new table that a new version of the app depends on, or add a new column to an existing table.
1. Apply source schema changes (i.e. in Postgres database) (often as a pre-deploy step as part of 2)
2. Deploy backend application changes
- 3. Deploy [Sync Rule](/sync/rules/overview) changes
- 4. Wait for Sync Rule reprocessing to complete
+ 3. Deploy [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) changes
+ 4. Wait for reprocessing to complete
5. Publish the app (may be deployed with delayed publishing at any prior point)
diff --git a/maintenance-ops/implementing-schema-changes.mdx b/maintenance-ops/implementing-schema-changes.mdx
index 36c8910d..114baa3a 100644
--- a/maintenance-ops/implementing-schema-changes.mdx
+++ b/maintenance-ops/implementing-schema-changes.mdx
@@ -6,15 +6,15 @@ title: "Implementing Schema Changes"
The [PowerSync protocol](/architecture/powersync-protocol) is schemaless, and not directly affected by schema changes.
-Replicating data from the source database to [buckets](/sync/rules/overview) may be affected by server-side changes to the schema (in the case of Postgres), and may need [reprocessing](/maintenance-ops/compacting-buckets) in some cases.
+Replicating data from the source database to [buckets](/architecture/powersync-service#bucket-system) may be affected by server-side changes to the schema (in the case of Postgres), and may need [reprocessing](/maintenance-ops/compacting-buckets) in some cases.
The [client-side schema](/intro/setup-guide#define-your-client-side-schema) is just a view on top of the schemaless data. Updating this client-side schema is immediate when the new version of the app runs, with no client-side migrations required.
The developer is responsible for keeping client-side schema changes backwards-compatible with older versions of client apps. PowerSync has some functionality to assist with this:
-1. [Different Sync Rules](/sync/advanced/multiple-client-versions) can be applied based on [parameters](/sync/rules/client-parameters) such as client version.
+1. [Different stream queries](/sync/advanced/multiple-client-versions) can be applied based on [connection parameters](/sync/streams/parameters#connection-parameters) such as client version. (In Sync Rules, this uses [client parameters](/sync/rules/client-parameters).)
-2. Sync Rules can apply simple [data transformations](/sync/rules/data-queries) to keep data in a format compatible with older clients.
+2. Stream queries can apply simple data transformations to keep data in a format compatible with older clients, for example by aliasing or casting columns. (In Sync Rules, this is done via [data query expressions](/sync/rules/data-queries).)
## Client-Side Impact of Schema and Sync Rule Changes
@@ -42,7 +42,7 @@ The schema as supplied on the client is only a view on top of the schemaless dat
Nothing in PowerSync will fail hard if there are incompatible schema changes. But depending on how the app uses the data, app logic may break. For example, removing a table/collection that the app actively uses may break workflows in the app.
-To avoid certain types of breaking changes on older clients, Sync Rule [transformations](/sync/rules/data-queries) may be used.
+To avoid certain types of breaking changes on older clients, data transformations may be used — via column aliasing/casting in [Sync Streams](/sync/streams/queries#selecting-columns), or [data query expressions](/sync/rules/data-queries) in Sync Rules.
## Postgres Specifics
@@ -58,11 +58,11 @@ However, this does not include DDL (Data Definition Language), which includes:
4. Changing the type of a column.
-### Postgres schema changes affecting Sync Rules
+### Postgres schema changes affecting Sync Streams
#### DROP table
-Dropping a table is not directly detected by PowerSync, and previous data may be preserved. To make sure the data is removed, `TRUNCATE` the table before dropping, or remove the table from [Sync Rules](/sync/rules/overview).
+Dropping a table is not directly detected by PowerSync, and previous data may be preserved. To make sure the data is removed, `TRUNCATE` the table before dropping, or remove the table from your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
#### CREATE table
@@ -100,7 +100,7 @@ The latter can happen if:
When the replica identity changes, the entire table is re-replicated again. This may be a slow operation if the table is large, and all other replication will be blocked until the table is replicated again.
-Sync rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes.
+Sync Streams / Sync Rules affected by schema changes will fail "soft" — an alert is generated, but the system continues processing changes.
#### Column changes
@@ -142,11 +142,11 @@ Due to a limitation in the replication process, dropping a collection does not i
### Renaming Collections
-Renaming a synced collection to a name that _is not included_ in the Sync Rules has the same effect as dropping the collection.
+Renaming a synced collection to a name that _is not included_ in Sync Streams (or legacy Sync Rules) has the same effect as dropping the collection.
-Renaming an unsynced collection to a name that is included in the Sync Rules triggers an initial snapshot replication. The time required for this process depends on the collection size.
+Renaming an unsynced collection to a name that is included in your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) triggers an initial snapshot replication. The time required for this process depends on the collection size.
-Circular renames (e.g., renaming `todos` → `todos_old` → `todos`) are not directly supported. To reprocess the database after such changes, a Sync Rules update must be deployed.
+Circular renames (e.g., renaming `todos` → `todos_old` → `todos`) are not directly supported. To reprocess the database after such changes, a [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) update must be deployed.
## MySQL (Beta) Specifics
@@ -164,9 +164,9 @@ The binary log also provides DDL (Data Definition Language) query updates, which
For MySQL, PowerSync detects schema changes by parsing the DDL queries in the binary log. It may not always be possible to parse the DDL queries correctly, especially if they are complex or use non-standard syntax.
In such cases, PowerSync will ignore the schema change, but will log a warning with the schema change query. If required, the schema change would then need to be manually
-handled by redeploying the sync rules. This triggers a re-replication.
+handled by redeploying your [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). This triggers a re-replication.
-### MySQL schema changes affecting Sync Rules
+### MySQL schema changes affecting Sync Streams
#### DROP table
@@ -205,7 +205,7 @@ The latter can happen if:
When the replication identity changes, the entire table is replicated again. This may be a slow operation if the table is large, and all other replication will be blocked until the table is replicated again.
-Sync rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes.
+Sync Streams / Sync Rules affected by schema changes will fail "soft" — an alert is generated, but the system continues processing changes.
#### Column changes
diff --git a/maintenance-ops/production-readiness-guide.mdx b/maintenance-ops/production-readiness-guide.mdx
index b2f1b80b..3f766491 100644
--- a/maintenance-ops/production-readiness-guide.mdx
+++ b/maintenance-ops/production-readiness-guide.mdx
@@ -262,11 +262,11 @@ Because PowerSync relies on Postgres logical replication, it's important to cons
The WAL growth rate is expected to increase substantially during the initial replication of large datasets with high update frequency, particularly for tables included in the PowerSync publication.
-During normal operation (after Sync Rules are deployed) the WAL growth rate is much smaller than the initial replication period, since the PowerSync Service can replicate ~5k operations per second, meaning the WAL lag is typically in the MB range as opposed to the GB range.
+During normal operation (after your Sync Streams or legacy Sync Rules are deployed), the WAL growth rate is much smaller than during the initial replication period, since the PowerSync Service can replicate ~5k operations per second; the WAL lag is typically in the MB range as opposed to the GB range.
When deciding what to set the `max_slot_wal_keep_size` configuration parameter the following should be taken in account:
1. Database size - This impacts the time it takes to complete the initial replication from the source Postgres database.
-2. Sync Rules complexity - This also impacts the time it takes to complete the initial replication.
+2. Sync Streams (or legacy Sync Rules) complexity - This also impacts the time it takes to complete the initial replication.
3. Postgres update frequency - The frequency of updates (of tables included in the publication you create for PowerSync) during initial replication. The WAL growth rate is directly proportional to this.
To view the current replication slots that are being used by PowerSync you can run the following query:
@@ -287,12 +287,12 @@ FROM pg_settings
WHERE name = 'max_slot_wal_keep_size'
```
-It's recommended to check the current replication slot lag and `max_slot_wal_keep_size` when deploying Sync Rules changes to your PowerSync Service instance, especially when you're working with large database volumes.
+It's recommended to check the current replication slot lag and `max_slot_wal_keep_size` when deploying Sync Streams (or legacy Sync Rules) changes to your PowerSync Service instance, especially when you're working with large database volumes.
If you notice that the replication lag is greater than the current `max_slot_wal_keep_size` it's recommended to increase value of the `max_slot_wal_keep_size` on the connected source Postgres database to accommodate for the lag and to ensure the PowerSync Service can complete initial replication without further delays.
### Managing Replication Slots
-Under normal operating conditions when new Sync Rules are deployed to a PowerSync Service instance, a new replication slot will also be created and used for replication. The old replication slot from the previous version of the Sync Rules will still remain, until Sync Rules reprocessing is completed, at which point the old replication slot will be removed by the PowerSync Service.
+Under normal operating conditions, when new Sync Streams (or legacy Sync Rules) are deployed to a PowerSync Service instance, a new replication slot is created and used for replication. The old replication slot from the previous version remains until reprocessing completes, at which point the PowerSync Service removes it.
However, in some cases, a replication slot may remain without being used. Usually this happens when a PowerSync Service instance is de-provisioned, stopped intentionally or due to unexpected errors. This results in excessive disk usage due to the continued growth of the WAL.
To check which replication slots used by a PowerSync Service are no longer active, the following query can be executed against the source Postgres database:
diff --git a/maintenance-ops/self-hosting/aws-ecs.mdx b/maintenance-ops/self-hosting/aws-ecs.mdx
index c486e154..13bfdb35 100644
--- a/maintenance-ops/self-hosting/aws-ecs.mdx
+++ b/maintenance-ops/self-hosting/aws-ecs.mdx
@@ -20,7 +20,7 @@ Create your `powersync.yaml` configuration file following the [Self-Hosted Confi
Your configuration must include:
-- [Sync Rules](/sync/rules/overview): Define which data to sync to clients
+- [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)): Define which data to sync to clients
- [Client Auth](/configuration/auth/overview): Your authentication provider's JWKS
- [Source Database](/configuration/source-db/setup): Connection details for your source database
- [Bucket Storage](/configuration/powersync-service/self-hosted-instances#bucket-storage-database): Connection details for your bucket storage database. PowerSync supports MongoDB or Postgres as bucket storage databases. In this guide, we focus on MongoDB.
diff --git a/maintenance-ops/self-hosting/coolify.mdx b/maintenance-ops/self-hosting/coolify.mdx
index ff5ee8a9..60d93963 100644
--- a/maintenance-ops/self-hosting/coolify.mdx
+++ b/maintenance-ops/self-hosting/coolify.mdx
@@ -61,7 +61,7 @@ The easiest way to get started is to use **Supabase** as it provides all three.
The following configuration options should be updated:
- Environment variables
-- `sync_rules.yaml` file (according to your data requirements)
+- `sync-config.yaml` file (according to your data requirements)
- `powersync.yaml` file
@@ -224,16 +224,18 @@ The following Compose file serves as a universal starting point for deploying th
volumes:
- ./volumes/config:/home/config
- type: bind
- source: ./volumes/config/sync_rules.yaml
- target: /home/config/sync_rules.yaml
+ source: ./volumes/config/sync-config.yaml
+ target: /home/config/sync-config.yaml
content: |
- bucket_definitions:
- user_lists:
- # Separate bucket per To-Do list
- parameters: select id as list_id from lists where owner_id = request.user_id()
- data:
- - select * from lists where id = bucket.list_id
- - select * from todos where list_id = bucket.list_id
+ config:
+ edition: 3
+ streams:
+ user_list_data:
+ # Sync all lists and todos for the authenticated user
+ auto_subscribe: true
+ queries:
+ - SELECT * FROM lists WHERE owner_id = auth.user_id()
+ - SELECT * FROM todos WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
- type: bind
source: ./volumes/config/powersync.yaml
target: /home/config/powersync.yaml
@@ -304,9 +306,9 @@ The following Compose file serves as a universal starting point for deploying th
# The port which the PowerSync API server will listen on
port: !env PS_PORT
- # Specify sync rules
- sync_rules:
- path: /home/config/sync_rules.yaml
+ # Specify Sync Streams (or legacy Sync Rules)
+ sync_config:
+ path: /home/config/sync-config.yaml
# Client (application end user) authentication settings
client_auth:
@@ -363,7 +365,7 @@ The following Compose file serves as a universal starting point for deploying th
- Navigate to the `Storages` tab and update the `sync_rules.yaml` and `powersync.yaml` files as needed.
+ Navigate to the `Storages` tab and update the `sync-config.yaml` and `powersync.yaml` files as needed.
For more information see [Sync Rules](/sync/rules/overview) and
the skeleton config file in [Service Configuration](/configuration/powersync-service/self-hosted-instances).
@@ -377,7 +379,7 @@ The following Compose file serves as a universal starting point for deploying th
-
+
diff --git a/maintenance-ops/self-hosting/diagnostics.mdx b/maintenance-ops/self-hosting/diagnostics.mdx
index 1184bbcc..1d7a076d 100644
--- a/maintenance-ops/self-hosting/diagnostics.mdx
+++ b/maintenance-ops/self-hosting/diagnostics.mdx
@@ -7,7 +7,7 @@ All self-hosted PowerSync Service instances ship with a Diagnostics API.
This API provides the following diagnostic information:
- Connections → Connected backend source database and any active errors associated with the connection.
-- Active Sync Rules → Currently deployed sync rules and the status of the sync rules.
+- Active Sync Streams / Sync Rules → Currently deployed Sync Streams (or legacy Sync Rules) and their status.
# Configuration
diff --git a/maintenance-ops/self-hosting/migrating-instances.mdx b/maintenance-ops/self-hosting/migrating-instances.mdx
index e07e214c..368e4551 100644
--- a/maintenance-ops/self-hosting/migrating-instances.mdx
+++ b/maintenance-ops/self-hosting/migrating-instances.mdx
@@ -7,7 +7,7 @@ description: "Migrating users between PowerSync instances"
In some cases, you may want to migrate users between PowerSync instances. This may be between cloud and self-hosted instances, or even just to change the endpoint.
-If the PowerSync instances use the same source database and have the same basic configuration and sync rules, you can migrate users by just changing the endpoint to the new instance.
+If the PowerSync instances use the same source database and have the same basic configuration and Sync Streams (or legacy Sync Rules), you can migrate users by just changing the endpoint to the new instance.
To make this process easier, we recommend using an API to retrieve the PowerSync endpoint, instead of hardcoding the endpoint in the client application. If you're using custom authentication, this can be done in the same API call as getting the authentication token.
diff --git a/maintenance-ops/self-hosting/railway.mdx b/maintenance-ops/self-hosting/railway.mdx
index 1705f6d3..f29430d7 100644
--- a/maintenance-ops/self-hosting/railway.mdx
+++ b/maintenance-ops/self-hosting/railway.mdx
@@ -113,7 +113,7 @@ storage:
port: 80
-sync_rules:
+sync_config:
content: |
bucket_definitions:
global:
diff --git a/maintenance-ops/self-hosting/update-sync-rules.mdx b/maintenance-ops/self-hosting/update-sync-rules.mdx
index 4b62498a..43cef2e6 100644
--- a/maintenance-ops/self-hosting/update-sync-rules.mdx
+++ b/maintenance-ops/self-hosting/update-sync-rules.mdx
@@ -1,41 +1,59 @@
---
-title: "Update Sync Rules"
-description: "How to update sync rules in a self-hosted PowerSync deployment"
+title: "Update Sync Streams"
+sidebarTitle: "Update Sync Streams/Rules"
+description: "How to update Sync Streams (or legacy Sync Rules) in a self-hosted PowerSync deployment"
---
-There are two ways to update sync rules in a self-hosted deployment:
+There are two ways to update Sync Streams (or legacy Sync Rules) in a self-hosted deployment:
1. **Config file** - Update your config and restart the service
2. **API endpoint** - Deploy at runtime without restarting
- During deployment, existing sync rules continue serving clients while new
- rules process. Clients seamlessly transition once [initial
- replication](/architecture/powersync-service#initial-replication-vs-incremental-replication)
+ During deployment, the existing version of your Sync Streams / Sync Rules continues serving clients while the new version is processed.
+ Clients seamlessly transition once [initial replication](/architecture/powersync-service#initial-replication-vs-incremental-replication)
completes.
## Option 1: Config File (Recommended)
-Define sync rules in your `powersync.yaml` either inline or via a separate file. See [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances) for the full config reference and [Sync Rules](/sync/rules/overview) for syntax.
+Define Sync Streams (or legacy Sync Rules) in your `powersync.yaml` via a separate file (recommended) or inline.
+
+See [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances) for the full config reference and [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) for syntax.
- Update the `sync_rules` section in your `powersync.yaml`:
+ Update the `sync_config` section in your `powersync.yaml`. The same `sync_config` key is used for both Sync Streams and Sync Rules:
- ```yaml Inline
- sync_rules:
+ ```yaml Sync Streams — Separate File (Recommended)
+ sync_config:
+ path: sync-config.yaml
+ ```
+
+ ```yaml Sync Streams — Inline
+ sync_config:
+ content: |
+ config:
+ edition: 3
+ streams:
+ users:
+ auto_subscribe: true
+ query: SELECT * FROM public.users
+ ```
+
+ ```yaml Sync Rules — Separate File (Legacy)
+ sync_config:
+ path: sync-config.yaml
+ ```
+
+ ```yaml Sync Rules — Inline (Legacy)
+ sync_config:
content: |
bucket_definitions:
global:
data:
- SELECT * FROM public.users
```
-
- ```yaml Separate File
- sync_rules:
- path: sync-rules.yaml
- ```
@@ -46,17 +64,17 @@ Define sync rules in your `powersync.yaml` either inline or via a separate file.
docker compose restart powersync
```
- Once the service starts up, it will load the updated sync rules and begin processing them while continuing to serve existing rules until initial replication completes.
+ Once the service starts up, it will load the updated Sync Streams / Sync Rules and begin processing them while continuing to serve the existing version until initial replication completes.
## Option 2: Deploy via API
-Deploy sync rules at runtime without restarting. Useful for quick iterations during development.
+Deploy Sync Streams (or legacy Sync Rules) at runtime without restarting. Useful for quick iterations during development.
- The API is disabled when sync rules are defined in `powersync.yaml`. Config
- file rules always take precedence.
+ The API is disabled when Sync Streams (or legacy Sync Rules) are defined in `powersync.yaml` — definitions in the config file always take precedence.
@@ -70,7 +88,7 @@ Deploy sync rules at runtime without restarting. Useful for quick iterations dur
```
-
+
```shell
curl -X POST http://:/api/sync-rules/v1/deploy \
-H "Content-Type: application/yaml" \
@@ -87,7 +105,7 @@ Deploy sync rules at runtime without restarting. Useful for quick iterations dur
| Endpoint | Method | Description |
| ------------------------------ | ------ | --------------------------------- |
-| `/api/sync-rules/v1/current` | GET | Get active and pending sync rules |
+| `/api/sync-rules/v1/current` | GET | Get active and pending Sync Streams / Sync Rules |
| `/api/sync-rules/v1/reprocess` | POST | Restart replication from scratch |
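+
+To inspect what is currently deployed — for example, to confirm a deploy went through — the `current` endpoint can be queried the same way (host and port elided as in the deploy example above; depending on your configuration, an authorization header may also be required):
+
+```shell
+# Returns the active and any pending Sync Streams / Sync Rules
+curl http://:/api/sync-rules/v1/current
+```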
## Troubleshooting
@@ -96,9 +114,9 @@ Common errors when using the API:
| Error Code | Meaning |
| ------------- | --------------------------------------------------- |
-| `PSYNC_S4105` | Sync rules defined in config file - API is disabled |
-| `PSYNC_S4104` | No sync rules deployed yet |
-| `PSYNC_R0001` | Invalid sync rules YAML - check `details` field |
+| `PSYNC_S4105` | Sync Streams / Sync Rules defined in config file - API is disabled |
+| `PSYNC_S4104` | No Sync Streams / Sync Rules deployed yet |
+| `PSYNC_R0001` | Invalid Sync Streams / Sync Rules YAML - check `details` field |
See [Error Codes Reference](/debugging/error-codes) for the complete list.
diff --git a/migration-guides/atlas-device-sync.mdx b/migration-guides/atlas-device-sync.mdx
index 27f35f2a..f5a008da 100644
--- a/migration-guides/atlas-device-sync.mdx
+++ b/migration-guides/atlas-device-sync.mdx
@@ -16,7 +16,7 @@ PowerSync was spun off as a standalone product in 2023, and gives engineering te
PowerSync’s MongoDB connector has been **developed in collaboration with MongoDB** to provide an easy setup process. It reached **General Availability (GA) status** with its [V1 release](https://www.powersync.com/blog/powersyncs-mongodb-connector-hits-ga-with-version-1-0) and is fully supported for production use. Multiple MongoDB customers currently use PowerSync in production environments.
-The server-side [PowerSync Service](/architecture/powersync-service) connects to MongoDB and pre-processes and pre-indexes data to be efficiently synced to users based on defined _Sync Rules_. Client applications embedding the _PowerSync Client SDK_ connect to the PowerSync Service to sync only a relevant subset of data to each user, based on the Sync Rules. Incremental updates in MongoDB are synced to clients in real-time.
+The server-side [PowerSync Service](/architecture/powersync-service) connects to MongoDB and pre-processes and pre-indexes data to be efficiently synced to users based on defined _Sync Streams_ (or legacy _Sync Rules_). Client applications embedding the _PowerSync Client SDK_ connect to the PowerSync Service to sync only a relevant subset of data to each user, based on the Sync Streams (or legacy Sync Rules). Incremental updates in MongoDB are synced to clients in real-time.
Client applications get a SQLite database that they can read from and write to. PowerSync provides for bi-directional syncing so that mutations in the client-side SQLite database are automatically synced back to the source MongoDB database. If users are offline or have patchy connectivity, PowerSync automatically manages network failures and retries.
@@ -49,10 +49,10 @@ Here is a quick overview of the resulting PowerSync architecture:
* **Authentication**: PowerSync piggybacks off your app’s existing [authentication](/configuration/auth/overview), and JWTs are used to authenticate between clients and the PowerSync Service. If you are using Atlas Device SDKs for authentication, you will need to implement an authentication provider.
* **PowerSync Client SDKs** use **SQLite** under the hood. Even though MongoDB is a "NoSQL" document database, PowerSync’s use of SQLite works well with MongoDB, since the [PowerSync protocol](/architecture/powersync-protocol) is schemaless (it syncs schemaless JSON data) and we dynamically apply a [client-side schema](/intro/setup-guide#define-your-client-side-schema) to the data in SQLite using SQLite views. Client-side queries can be written in SQL or you can make use of an ORM (we provide a few [ORM integrations](https://www.powersync.com/blog/using-orms-with-powersync)). Working with embedded documents and arrays from MongoDB is easy with SQLite due to [its JSON support](/client-sdks/advanced/query-json-in-sqlite).
* **Reads vs Writes**: PowerSync handles syncing of reads differently from writes (mutations):
- * **Reads**: The PowerSync Service connects to your MongoDB database for real-time replication of data, and syncs data to clients based on the [Sync Rules](/sync/rules/overview) configuration. Sync Rules are more flexible than MongoDB Realm Flexible Sync, but are defined on the server-side, not on the client-side.
+ * **Reads**: The PowerSync Service connects to your MongoDB database for real-time replication of data, and syncs data to clients based on [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)). Sync Streams/Rules are more flexible than MongoDB Realm Flexible Sync, but are defined on the server-side, not on the client-side.
* **Writes**: The client-side application can perform writes (mutations) directly on the SQLite database. The PowerSync Client SDK automatically places those mutations into an [upload queue](/architecture/client-architecture#writing-data-via-sqlite-database-and-upload-queue) and invokes an `uploadData()` function (defined by you) as needed to upload those mutations sequentially to your backend application.
* **Authorization**: Authorization is controlled separately for reads vs. writes.
- * **Reads**: The [Sync Rules](/sync/rules/overview) control which users can access which data.
+ * **Reads**: The [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)) control which users can access which data.
* **Writes**: Your backend application controls authorization for how users can modify data, when it receives uploaded mutations from clients.
* **Backend Application**: PowerSync requires a backend API interface to upload mutations to MongoDB (and optionally for custom authentication too). There are currently two options:
* **"Bring your own backend"**: If you already have a backend application as part of your stack, you should use your existing backend. If you don’t yet have one, but would like to run your own backend, we have example implementations available. See the [instructions below](#2-accept-uploads-on-the-backend) for more details.
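The schemaless-JSON-plus-views approach described above can be sketched with plain SQLite. This is a minimal illustration with hypothetical table and column names, not the SDK's actual setup code; `json_extract(data, '$.description')` is equivalent to the `data ->> '$.description'` form used by the generated views:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical stand-in for an internal ps_data__ table: one JSON document per row.
conn.execute("CREATE TABLE ps_data__lists (id TEXT PRIMARY KEY, data TEXT)")
conn.execute(
    "INSERT INTO ps_data__lists VALUES ('row1', '{\"description\": \"groceries\"}')"
)
# A typed view extracts each client-side schema column from the JSON,
# so queries can use ordinary columns while storage stays schemaless.
conn.execute(
    "CREATE VIEW lists AS "
    "SELECT id, CAST(json_extract(data, '$.description') AS TEXT) AS description "
    "FROM ps_data__lists"
)
print(conn.execute("SELECT description FROM lists").fetchone()[0])  # groceries
```

Because the view is defined over JSON, changing the client-side schema only means redefining the view; the underlying table is untouched.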
@@ -74,7 +74,7 @@ Follow the steps for MongoDB and your client platform/framework in our standard
* [Configure Your Source Database](/intro/setup-guide#1-configure-your-source-database)
* [Set Up PowerSync Service Instance](/intro/setup-guide#2-set-up-powersync-service-instance)
* [Connect PowerSync To Your Source Database](/intro/setup-guide#3-connect-powersync-to-your-source-database) (MongoDB)
-* [Define Basic Sync Rules](/intro/setup-guide#4-define-basic-sync-rules)
+* [Define Sync Streams or Sync Rules](/intro/setup-guide#4-define-sync-streams-or-sync-rules)
* [Generate a Development Token](/intro/setup-guide#5-generate-a-development-token)
* [Test Sync with the Sync Diagnostics Client](/intro/setup-guide#6-%5Boptional%5D-test-sync-with-the-sync-diagnostics-client)
* [Use the Client SDK](/intro/setup-guide#7-use-the-client-sdk)
diff --git a/resources/feature-status.mdx b/resources/feature-status.mdx
index cd64b7af..b7157fc6 100644
--- a/resources/feature-status.mdx
+++ b/resources/feature-status.mdx
@@ -52,7 +52,7 @@ Below is a summary of the current main PowerSync features and their release stat
| | |
| **PowerSync Service** | |
| Enterprise Self-Hosted | Closed Alpha |
-| Open Edition | Beta |
+| Sync Streams | Beta |
| Postgres Bucket Storage | V1 |
| | |
| **Client SDKs** | |
diff --git a/resources/hipaa.mdx b/resources/hipaa.mdx
index dbed0c1c..18612c2e 100644
--- a/resources/hipaa.mdx
+++ b/resources/hipaa.mdx
@@ -41,7 +41,7 @@ The customer remains the owner of their application, databases, and client devic
* **Data Filtering and Access Control**
- Customers must configure Sync Rules / Sync Streams to ensure only the minimum necessary ePHI is synchronized to specific client devices, and must ensure the authentication setup is correctly implemented to restrict data to the correct client devices.
+ Customers must configure Sync Streams / Sync Rules (legacy) to ensure only the minimum necessary ePHI is synchronized to specific client devices, and must ensure the authentication setup is correctly implemented to restrict data to the correct client devices.
* **Network Restrictions (IP Filtering, AWS Private Endpoints)**
Customers must use [AWS PrivateLink](/configuration/source-db/private-endpoints) where possible, or configure and restrict source database and bucket storage database access to PowerSync Cloud’s [IP addresses](/configuration/source-db/security-and-ip-filtering).
@@ -82,7 +82,7 @@ HIPAA compliance is a continuous, shared process between the customer and PowerS
| :---- | :---- | :---- |
| **Source Database** | Responsible for the security and HIPAA status of the source database hosting. | Responsible for the secure, encrypted connection to the database. |
| **Bucket Storage Database** | Responsible for the security and HIPAA status of the bucket storage database hosting. | Responsible for the secure, encrypted connection to the database. |
-| **Synchronization Service** | Responsible for proper configuration of Sync Rules / Streams data filtering to prevent unnecessary data exposure. | Responsible for securing the PowerSync Service infrastructure and ensuring data is encrypted while processed. |
+| **Synchronization Service** | Responsible for proper configuration of Sync Streams / Sync Rules data filtering to prevent unnecessary data exposure. | Responsible for securing the PowerSync Service infrastructure and ensuring data is encrypted while processed. |
| **Client Devices (e.g., Mobile App, Web App)** | **Wholly Responsible** for securing the client-side SQLite database, applying user authentication, authorization, and data purge policies on the device. | Responsible for securing the client-side SDKs. |
## Frequently Asked Questions
diff --git a/resources/local-first-software.mdx b/resources/local-first-software.mdx
index ab81fe13..7a2ffe48 100644
--- a/resources/local-first-software.mdx
+++ b/resources/local-first-software.mdx
@@ -53,7 +53,7 @@ Here's how applications built using PowerSync can be brought closer to the [7 id
| ------------------------------------------------------------------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| **Fast**: By accessing data locally, the software should be able to respond near-instantaneously to user input | PowerSync inherently provides this: All reads and writes use a local SQLite database, resulting in near-zero latency for accessing data. |
| **Multi-Device**: Data should be synchronized across all of the devices on which a user does their work. | PowerSync automatically syncs data to different user devices. |
-| **Offline**: The user should be able to read and write their data anytime, even while offline. | PowerSync allows for offline usage of applications for arbitrarily long periods of time. Developers can also optionally create apps as [offline-only](/client-sdks/advanced/local-only-usage) and turn on syncing of data when it suits them, including on a per-user basis.When syncing is configured, data is synced to users based on the [Sync Rules](/sync/rules/overview) configuration for offline access. Mutations to data while the user is offline are placed in an upload queue and periodically attempted to be [uploaded](/configuration/app-backend/client-side-integration) when connectivity is available (this is automatically managed by the PowerSync Client SDK). |
+| **Offline**: The user should be able to read and write their data anytime, even while offline. | PowerSync allows for offline usage of applications for arbitrarily long periods of time. Developers can also optionally create apps as [offline-only](/client-sdks/advanced/local-only-usage) and turn on syncing of data when it suits them, including on a per-user basis. When syncing is configured, data is synced to users based on the [Sync Streams](/sync/streams/overview) (or [Sync Rules](/sync/rules/overview)) for offline access. Mutations to data while the user is offline are placed in an upload queue and periodically attempted to be [uploaded](/configuration/app-backend/client-side-integration) when connectivity is available (this is automatically managed by the PowerSync Client SDK). |
| **Collaboration**: The ideal is to support real-time collaboration that is on par with the best cloud apps today. | PowerSync allows building collaborative applications either with [custom conflict resolution](/handling-writes/custom-conflict-resolution), or [using CRDT](/client-sdks/advanced/crdts) data structures stored as blob data for fine-grained collaboration. |
| **Longevity**: Work the user did with the software should continue to be accessible indefinitely, even after the company that produced the software is gone. | PowerSync relies on open-source and source-available software, meaning that the end-user can self-host Postgres (open-source) and the [PowerSync Service](/architecture/powersync-service) (source-available) should they wish to continue using PowerSync to sync data after the software producer shuts down backend services. There is also an onus on the software developer to ensure longevity, such as allowing exporting of data and avoiding reliance on other proprietary backend services. |
| **Privacy**: The software should use end-to-end encryption so that servers that store a copy of users’ files only hold encrypted data that they cannot read. | For details on end-to-end encryption with PowerSync, refer to our [Encryption](/client-sdks/advanced/data-encryption) section. |
diff --git a/resources/performance-and-limits.mdx b/resources/performance-and-limits.mdx
index 138dbd8d..837483b0 100644
--- a/resources/performance-and-limits.mdx
+++ b/resources/performance-and-limits.mdx
@@ -26,7 +26,7 @@ The PowerSync Cloud **Team** and **Enterprise** plans allow several of these lim
- **Small rows**: 2,000-4,000 operations per second
- **Large rows**: Up to 5MB per second
- **Transaction processing**: ~60 transactions per second for smaller transactions
-- **Reprocessing**: Same rates apply when reprocessing sync rules or adding new tables
+- **Reprocessing**: Same rates apply when reprocessing Sync Streams/Sync Rules or adding new tables
### Sync (PowerSync Service → Client)
diff --git a/resources/usage-and-billing.mdx b/resources/usage-and-billing.mdx
index 6131d437..f20f04b6 100644
--- a/resources/usage-and-billing.mdx
+++ b/resources/usage-and-billing.mdx
@@ -73,6 +73,6 @@ Usage limits for PowerSync Cloud are specified on our [Pricing page](https://www
Instances on the Free plan that have had no deploys or client connections for over 7 days will be deprovisioned. This helps us optimize our cloud resources and ensure a better experience for all users.
-If your instance is deprovisioned, you can easily restart it from the [PowerSync Dashboard](https://dashboard.powersync.com/) or [CLI](/tools/cli) by deploying Sync Rules to it. Note that this will reprocess your Sync Rules from scratch, causing data to re-sync to existing users.
+If your instance is deprovisioned, you can easily restart it from the [PowerSync Dashboard](https://dashboard.powersync.com/) or [CLI](/tools/cli) by deploying your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) to it. Note that this will reprocess it from scratch, causing data to re-sync to existing users.
For projects in production we recommend subscribing to a [paid plan](https://www.powersync.com/pricing) to avoid any interruptions. To upgrade to a paid plan, navigate to your organization in the [PowerSync Dashboard](https://dashboard.powersync.com/) and visit the **Plans & Billing** section.
diff --git a/resources/usage-and-billing/pricing-example.mdx b/resources/usage-and-billing/pricing-example.mdx
index 743946b8..593c2b4c 100644
--- a/resources/usage-and-billing/pricing-example.mdx
+++ b/resources/usage-and-billing/pricing-example.mdx
@@ -32,7 +32,7 @@ Data size, transfer and storage assumptions:
* **Messages are 0.25 KB in size on average.** 1KB can store around half a page’s worth of text. We assume the average message size on this app will be a quarter of that.
* **DAUs send and receive a combined total of 100 messages per day,** generating 100 rows in the messages table each day.
-* **Message data is only stored on local databases for three months.** Using PowerSync’s [Sync Rules](/sync/rules/overview), only messages sent and received in the last 3 months are stored in the local database embedded within a user’s app.
+* **Message data is only stored on local databases for three months.** Using PowerSync’s [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview), only messages sent and received in the last 3 months are stored in the local database embedded within a user’s app.
* **No attachments synced through PowerSync.** Attachments like files or photos are not synced through PowerSync.
* **1 PowerSync instance.** The backend database connects to a single PowerSync instance. A more typical setup may use 2 PowerSync instances: one for syncing from the staging database and one for the production database. Since staging data volumes are often negligible, we’ve ignored that in this example.
diff --git a/resources/usage-and-billing/usage-and-billing-faq.mdx b/resources/usage-and-billing/usage-and-billing-faq.mdx
index 6f375d83..dbcdc633 100644
--- a/resources/usage-and-billing/usage-and-billing-faq.mdx
+++ b/resources/usage-and-billing/usage-and-billing-faq.mdx
@@ -60,7 +60,7 @@ description: "Usage and billing FAQs and troubleshooting strategies."
The PowerSync Service hosts three types of data:
- 1. A current copy of the data, which should be roughly equal to the subset of your source data covered by your Sync Rules.
+ 1. A current copy of the data, which should be roughly equal to the subset of your source data covered by your Sync Streams (or legacy Sync Rules).
2. A history of all operations on data in buckets, which can be larger than the source since it includes history and one row can be in multiple buckets.
3. Data for parameter lookups, which is typically small.
@@ -143,7 +143,7 @@ The most common cause of excessive concurrent connections is opening multiple co
Sync operations are not billed in our updated pricing model, but they're useful for diagnosing spikes in data synced and understanding how data mutations affect usage.
-While sync operations typically correspond to data mutations on synced rows (those in your Sync Rules), several scenarios can affect your operation count:
+While sync operations typically correspond to data mutations on synced rows (those covered by your Sync Streams or legacy Sync Rules), several scenarios can affect your operation count:
### Key Scenarios
@@ -154,12 +154,12 @@ While sync operations typically correspond to data mutations on synced rows (tho
Compacting and defragmenting reduce operations history but trigger additional sync operations for existing users. See our [defragmenting guide](/maintenance-ops/compacting-buckets#defragmenting) to optimize this.
3. **Sync Rule Deployments:**
- When you deploy changes to Sync Rules, PowerSync recreates buckets from scratch. New app installations sync fewer operations since the operations history is reset, but existing users temporarily experience increased sync operations as they re-sync the updated buckets.
+ When you deploy changes to Sync Streams (or legacy Sync Rules), PowerSync recreates buckets from scratch. New app installations sync fewer operations since the operations history is reset, but existing users temporarily experience increased sync operations as they re-sync the updated buckets.
We're working on [incremental sync rule reprocessing](https://roadmap.powersync.com/c/85-more-efficient-sync-reprocessing), which will only reprocess buckets whose definitions have changed.
4. **Unsynced Columns:**
- Any row update triggers a new operation in the logical replication stream, regardless of which columns changed. PowerSync tracks changes at the row level, not the column level. This means updates to columns not included in your Sync Rules still create sync operations, and even a no-op update like `UPDATE mytable SET id = id` generates a new operation for each affected row.
+ Any row update triggers a new operation in the logical replication stream, regardless of which columns changed. PowerSync tracks changes at the row level, not the column level. This means updates to columns not included in your Sync Streams (or legacy Sync Rules) still create sync operations, and even a no-op update like `UPDATE mytable SET id = id` generates a new operation for each affected row.
Selectively syncing columns helps with data access control and reducing data transfer size, but it doesn't reduce the number of sync operations.
diff --git a/snippets/binary-type.mdx b/snippets/binary-type.mdx
new file mode 100644
index 00000000..ace29f83
--- /dev/null
+++ b/snippets/binary-type.mdx
@@ -0,0 +1,3 @@
+
+ Binary data can be accessed in the Sync Streams / Sync Rules, but cannot be used as [parameters](/sync/overview#how-it-works). To sync binary columns/fields to clients, those columns need to be converted to hex or base64 representation using the relevant [functions](/sync/supported-sql#functions).
+
\ No newline at end of file
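To make the hex/base64 round trip concrete, here is a small client-side sketch (plain Python, with hypothetical values) showing how text synced in either representation decodes back to the original bytes:

```python
import base64

# Hypothetical synced values: the same bytes, converted server-side to
# hex and base64 text representations before syncing.
hex_text = "48656c6c6f"
b64_text = base64.b64encode(b"Hello").decode()

# Client side: decode the synced text column back into raw bytes.
assert bytes.fromhex(hex_text) == b"Hello"
assert base64.b64decode(b64_text) == b"Hello"
```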
diff --git a/snippets/dev-token-self-hosted-steps.mdx b/snippets/dev-token-self-hosted-steps.mdx
index 5018ef0a..6d456fc1 100644
--- a/snippets/dev-token-self-hosted-steps.mdx
+++ b/snippets/dev-token-self-hosted-steps.mdx
@@ -37,11 +37,11 @@
- Add the `client_auth` parameter to your `config.yaml`:
+ Add the `client_auth` section to your `config.yaml`:
- Copy the JWK values from mkjwk.org or the pem-jwk output, then add to your config:
+ Copy the JWK values from [mkjwk.org](https://mkjwk.org/) or the `pem-jwk` output, then add to your config:
```yaml config.yaml
# Client (application end user) authentication settings
@@ -97,8 +97,8 @@
```
Replace `test-user` with the user ID you want to authenticate:
- - If you're using **global Sync Rules**, you can use any value (e.g., `test-user`) since all data syncs to all users
- - If you're using **user-specific Sync Rules**, use a user ID that matches a user in your database (this will be used as `request.user_id()` in your Sync Rules)
+ - If your Sync Streams/Rules data isn't filtered by user (same data syncs to all users), you can use any value (e.g., `test-user`).
+ - If your data is filtered by parameters, use a user ID that matches a user in your database. PowerSync uses this (e.g. `auth.user_id()` in Sync Streams or `request.user_id()` in Sync Rules) to determine what to sync.
diff --git a/snippets/flutter-installation.mdx b/snippets/flutter-installation.mdx
deleted file mode 100644
index e69de29b..00000000
diff --git a/snippets/javascript-web/installation.mdx b/snippets/javascript-web/installation.mdx
index 984e0eff..ca544001 100644
--- a/snippets/javascript-web/installation.mdx
+++ b/snippets/javascript-web/installation.mdx
@@ -20,7 +20,7 @@ Add the [PowerSync Web NPM package](https://www.npmjs.com/package/@powersync/web
-**Required peer dependencies**
+**Install Peer Dependencies**
This SDK currently requires [`@journeyapps/wa-sqlite`](https://www.npmjs.com/package/@journeyapps/wa-sqlite) as a peer dependency. Install it in your app with:
diff --git a/snippets/local-only-escape.mdx b/snippets/local-only-escape.mdx
index 7fc631a3..e4cdfbb5 100644
--- a/snippets/local-only-escape.mdx
+++ b/snippets/local-only-escape.mdx
@@ -1,3 +1,3 @@
- **Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
+ **Note**: This section assumes you want to use PowerSync to sync your backend source database with SQLite in your app. If you only want to use PowerSync to manage your local SQLite database without sync, instantiate the PowerSync database without calling `connect()` and refer to our [Local-Only](/client-sdks/advanced/local-only-usage) guide.
\ No newline at end of file
diff --git a/snippets/node/installation.mdx b/snippets/node/installation.mdx
index 334acd91..c956546e 100644
--- a/snippets/node/installation.mdx
+++ b/snippets/node/installation.mdx
@@ -20,10 +20,9 @@ Add the [PowerSync Node NPM package](https://www.npmjs.com/package/@powersync/no
-**Peer dependencies**
+**Install Peer Dependencies**
-The PowerSync SDK for Node.js supports multiple drivers. More details are available under [encryption and custom drivers](/client-sdks/reference/node#encryption-and-custom-sqlite-drivers),
-we currently recommend the `better-sqlite3` package for most users:
+The PowerSync SDK for Node.js supports multiple drivers. More details are available under [Encryption and Custom SQLite Drivers](/client-sdks/reference/node#encryption-and-custom-sqlite-drivers). We currently recommend the `better-sqlite3` package for most users:
@@ -49,10 +48,10 @@ we currently recommend the `better-sqlite3` package for most users:
Previous versions of the PowerSync SDK for Node.js used the `@powersync/better-sqlite3` fork as a
required peer dependency.
This is no longer recommended. After upgrading to `@powersync/node` version `0.12.0` or later, ensure
-the old package is no longer installed by running `@powersync/better-sqlite3`.
+the old package is no longer installed by running `npm uninstall @powersync/better-sqlite3`.
-**Common installation issues**
+**Common Installation Issues**
The `better-sqlite3` package requires native compilation, which depends on certain system tools.
Prebuilt assets are available and used by default, but a custom compilation may be started depending on the Node.js
diff --git a/snippets/react-native/installation.mdx b/snippets/react-native/installation.mdx
index ed09c93f..579216eb 100644
--- a/snippets/react-native/installation.mdx
+++ b/snippets/react-native/installation.mdx
@@ -18,7 +18,7 @@ Add the [PowerSync React Native NPM package](https://www.npmjs.com/package/@powe
-**Install peer dependencies**
+**Install Peer Dependencies**
PowerSync requires a SQLite database adapter. Choose between:
@@ -47,7 +47,7 @@ PowerSync requires a SQLite database adapter. Choose between:
```
-
+
The [@journeyapps/react-native-quick-sqlite](https://www.npmjs.com/package/@journeyapps/react-native-quick-sqlite) package is the original database adapter for React Native and therefore more battle-tested in production environments.
diff --git a/snippets/sdk-client-side-schema.mdx b/snippets/sdk-client-side-schema.mdx
index c3b87275..9b9c5c2f 100644
--- a/snippets/sdk-client-side-schema.mdx
+++ b/snippets/sdk-client-side-schema.mdx
@@ -1 +1 @@
-This refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The schema is applied when the database is instantiated (as we'll show in the next step) — no migrations are required.
\ No newline at end of file
+This refers to the schema for the managed SQLite database exposed by the PowerSync Client SDKs, that your app can read from and write to. The schema is applied when the database is instantiated (as we'll show in the next step) — no migrations are required.
\ No newline at end of file
diff --git a/snippets/stream-definition-reference.mdx b/snippets/stream-definition-reference.mdx
new file mode 100644
index 00000000..8740faf0
--- /dev/null
+++ b/snippets/stream-definition-reference.mdx
@@ -0,0 +1,32 @@
+```yaml
+config:
+ edition: 3
+
+streams:
+ <stream_name>:
+ # CTEs (optional) - define with block inside each stream
+ with:
+ <cte_name>: SELECT ... FROM ...
+
+ # Behavior options (place above query/queries)
+ auto_subscribe: true # Auto-subscribe clients on connect (default: false)
+ priority: 1 # Sync priority (optional). Lower number -> higher priority
+ accept_potentially_dangerous_queries: true # Silence security warnings (default: false)
+
+ # Query options (use one)
+ query: SELECT * FROM <table> WHERE ... # Single query
+ queries: # Multiple queries
+ - SELECT * FROM <table> WHERE ...
+ - SELECT * FROM <table> WHERE ...
+
+
+```
+
+| Option | Default | Description |
+|--------|---------|-------------|
+| `query` | — | SQL-like query defining which data to sync. Use either `query` or `queries`, not both. See [Writing Queries](/sync/streams/queries). |
+| `queries` | — | Array of queries defining which data to sync. More efficient than defining separate streams: the client manages one subscription and PowerSync merges the data from all queries (see [Multiple Queries per Stream](/sync/streams/queries#multiple-queries-per-stream)). |
+| `with` | — | [CTEs](/sync/streams/ctes) available to this stream's queries. Define the `with` block inside each stream. |
+| `auto_subscribe` | `false` | When `true`, clients automatically subscribe on connect. |
+| `priority` | — | Sync priority (lower value = higher priority). See [Prioritized Sync](/sync/advanced/prioritized-sync). |
+| `accept_potentially_dangerous_queries` | `false` | Silences security warnings when queries use client-controlled parameters (i.e. _connection parameters_ and _subscription parameters_), as opposed to _authentication parameters_ that are signed as part of the JWT. Set to `true` only if you've verified the query is safe. See [Using Parameters](/sync/streams/parameters). |
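Putting the options together, a filled-in stream might look like the following sketch (the `todos` table and `owner_id` column are hypothetical; `auth.user_id()` resolves to the authenticated user's ID as described in [Using Parameters](/sync/streams/parameters)):

```yaml
config:
  edition: 3

streams:
  user_todos:
    auto_subscribe: true   # every client gets this stream on connect
    priority: 1            # sync before lower-priority streams
    query: SELECT * FROM todos WHERE owner_id = auth.user_id()
```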
diff --git a/sync/advanced/client-id.mdx b/sync/advanced/client-id.mdx
index bca6fd94..466b26ae 100644
--- a/sync/advanced/client-id.mdx
+++ b/sync/advanced/client-id.mdx
@@ -5,17 +5,17 @@ description: "On the client, PowerSync only supports a single primary key column
For tables where the client will create new rows:
-- Postgres and MySQL: use a UUID for `id`. Use the `uuid()` helper to generate a random UUID (v4) on the client.
+- Postgres, MySQL and SQL Server: use a UUID for `id`. Use the `uuid()` helper to generate a random UUID (v4) on the client.
- MongoDB: use an `ObjectId` for `_id`. Generate an `ObjectId()` in your app code and store it in the client's `id` column as a string; this will map to MongoDB's `_id`.
-To use a different column/field from the server-side database as the record ID on the client, use a column/field alias in your Sync Rules:
+To use a different column/field from the server-side database as the record ID on the client, use a column/field alias in your [Sync Streams](/sync/streams/overview) query (or [Sync Rules](/sync/rules/overview) data query):
```sql
SELECT client_id as id FROM my_data
```
- MongoDB uses `_id` as the name of the ID field in collections. Therefore, PowerSync requires using `SELECT _id as id` in [Sync Rule's](/sync/rules/overview) data queries when using MongoDB as the backend source database. When inserting new documents from the client, prefer `ObjectId` values for `_id` (stored in the client's `id` column).
+ MongoDB uses `_id` as the name of the ID field in collections. You must use `SELECT _id as id` (and include any other columns you need) in [Sync Streams](/sync/streams/overview) queries and [Sync Rules](/sync/rules/overview) data queries when using MongoDB as the backend source database. When inserting new documents from the client, prefer `ObjectId` values for `_id` (stored in the client's `id` column).
Custom transformations can also be used for the ID column. This is useful in certain scenarios for example when dealing with join tables, because PowerSync doesn't currently support composite primary keys. For example:
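One approach, sketched here with a hypothetical `org_items` join table, concatenates the two key columns into a single synthetic `id` (this assumes string concatenation is available in your edition; see [Supported SQL](/sync/supported-sql)):

```sql
-- Hypothetical join table keyed on (org_id, item_id); the concatenation
-- produces a unique single-column id for the client.
SELECT org_id || ':' || item_id AS id, org_id, item_id, quantity
FROM org_items
```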
diff --git a/sync/advanced/compatibility.mdx b/sync/advanced/compatibility.mdx
index 613f1d8f..b8c1d19b 100644
--- a/sync/advanced/compatibility.mdx
+++ b/sync/advanced/compatibility.mdx
@@ -10,7 +10,7 @@ At the same time, we want to fix bugs or other inaccuracies that have accumulate
To make this trade‑off explicit, you choose whether to keep the existing behavior or turn on newer fixes that slightly change how data is processed.
-Use the `config` block in your Sync Rules YAML file to choose the behavior. There are two ways to turn fixes on:
+Use the `config` block in your sync config YAML to choose the behavior. There are two ways to turn fixes on:
1. Set an `edition` to enable the full set of fixes for that edition. This is the recommended approach for new projects.
2. Toggle individual options for more fine‑grained control.
@@ -21,17 +21,17 @@ For older projects, the previous behavior remains the default. New projects shou
For new projects, it is recommended to enable all current fixes by setting `edition: `:
-```yaml sync_rules.yaml
+```yaml
config:
- edition: 2 # Recommended to set to the latest available edition (see 'Supported fixes' table below)
+ edition: 3 # Recommended to set to the latest available edition (see 'Supported fixes' table below)
-bucket_definitions:
+streams:
# ...
```
Or, specify options individually:
-```yaml sync_rules.yaml
+```yaml
config:
timestamps_iso8601: true
versioned_bucket_ids: true
@@ -39,6 +39,24 @@ config:
custom_postgres_types: true
```
+## Sync Streams Requirement
+
+**New Sync Streams configurations should use `edition: 3`**, which enables the new compiler with an expanded SQL feature set (including `JOIN`, CTEs, multiple queries per stream, `BETWEEN`, `CASE`, and more):
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ my_stream:
+ query: SELECT * FROM my_table WHERE user_id = auth.user_id()
+```
+
+
+**Upgrading from alpha**: If you have existing Sync Streams using `edition: 2`, upgrade to `edition: 3` to enable the new compiler and its expanded SQL feature set. See [Supported SQL](/sync/supported-sql) for the full list of supported features.
+
+
+
## Supported fixes
This table lists all fixes currently supported:
@@ -72,9 +90,9 @@ You can use the `timestamp_max_precision` option to configure the actual precisi
For instance, a Postgres timestamp value would sync as `2025-09-22T14:29:30.000000` by default.
If you don't want that level of precision, you can use the following options to make it sync as `2025-09-22T14:29:30.000`:
-```yaml sync_rules.yaml
+```yaml sync-config.yaml
config:
- edition: 2
+ edition: 3
timestamp_max_precision: milliseconds
```
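The effect of the option can be illustrated with a small sketch. This is a plain string-level truncation of the fractional seconds for illustration, not the service's actual implementation:

```javascript
// Truncate an ISO-8601 timestamp's fractional seconds to a given number
// of digits (3 = milliseconds), mirroring the effect of
// `timestamp_max_precision: milliseconds` on synced values.
function truncateFraction(ts, digits) {
  return ts.replace(/\.(\d+)/, (_, frac) => '.' + frac.slice(0, digits).padEnd(digits, '0'));
}
```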
@@ -106,7 +124,7 @@ downloaded twice.
### `fixed_json_extract`
-This fixes the `json_extract` functions as well as the `->` and `->>` operators in sync rules to behave similar
+This fixes the `json_extract` functions as well as the `->` and `->>` operators in Sync Rules to behave similarly
to recent SQLite versions: We only split on `.` if the path starts with `$.`.
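The path-splitting rule described above can be sketched as follows. This is an illustration of the rule, not the service's implementation:

```javascript
// Split a json_extract path into key segments. With the fix enabled,
// the path is only split on '.' when it starts with '$.'; otherwise the
// whole path is treated as a single key (matching recent SQLite behavior).
function pathSegments(path) {
  if (path.startsWith('$.')) {
    return path.slice(2).split('.');
  }
  return [path];
}
```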
For instance, `json_extract('{"foo.bar": "baz"}', 'foo.bar')` would evaluate to:
diff --git a/sync/advanced/multiple-client-versions.mdx b/sync/advanced/multiple-client-versions.mdx
index 6313da32..cb7c99a2 100644
--- a/sync/advanced/multiple-client-versions.mdx
+++ b/sync/advanced/multiple-client-versions.mdx
@@ -3,27 +3,46 @@ title: "Multiple Client Versions"
description: "In some cases, different client versions may need different output schemas."
---
-When schema changes are additive, old clients would just ignore the new tables and columns, and no special handling is required. However, in some cases, the schema changes may be more drastic and may need separate Sync Rules based on the client version.
+When schema changes are additive, old clients simply ignore the new tables and columns, and no special handling is required. However, in some cases, schema changes may be more drastic and may require separate Sync Streams (or Sync Rules) based on the client version.
-To distinguish between client versions, we can pass in an additional[ client parameter](/sync/rules/client-parameters) from the client to the PowerSync Service instance. These parameters could be used to implement different logic based on the client version.
+To distinguish between client versions, clients can pass version information to the PowerSync Service. In [Sync Streams](/sync/streams/overview), these are called connection parameters (accessed via `connection.parameter()`). In legacy [Sync Rules](/sync/rules/overview), these are called [client parameters](/sync/rules/client-parameters).
Example to use different table names based on the client's `schema_version`:
-```yaml
-# Client passes in: "params": {"schema_version": }
- assets_v1:
- parameters: SELECT request.user_id() AS user_id
- WHERE request.parameters() ->> 'schema_version' = '1'
- data:
- - SELECT * FROM assets AS assets_v1 WHERE user_id = bucket.user_id
+
+
+ ```yaml
+ # Client passes connection params: {"schema_version": }
+ streams:
+ assets_v1:
+ query: SELECT * FROM assets AS assets_v1
+ WHERE user_id = auth.user_id()
+ AND connection.parameter('schema_version') = '1'
- assets_v2:
- parameters: SELECT request.user_id() AS user_id
- WHERE request.parameters() ->> 'schema_version' = '2'
- data:
- - SELECT * FROM assets AS assets_v2 WHERE user_id = bucket.user_id
-```
+ assets_v2:
+ query: SELECT * FROM assets AS assets_v2
+ WHERE user_id = auth.user_id()
+ AND connection.parameter('schema_version') = '2'
+ ```
+
+
+ ```yaml
+ # Client passes in: "params": {"schema_version": }
+ assets_v1:
+ parameters: SELECT request.user_id() AS user_id
+ WHERE request.parameters() ->> 'schema_version' = '1'
+ data:
+ - SELECT * FROM assets AS assets_v1 WHERE user_id = bucket.user_id
+
+ assets_v2:
+ parameters: SELECT request.user_id() AS user_id
+ WHERE request.parameters() ->> 'schema_version' = '2'
+ data:
+ - SELECT * FROM assets AS assets_v2 WHERE user_id = bucket.user_id
+ ```
+
+
- Handle queries based on parameters set by the client with care. The client can send any value for these parameters, so it's not a good place to do authorization. If the parameter must be authenticated, use parameters from the JWT instead. Read more: [Security consideration](/sync/rules/client-parameters#security-consideration)
+ Handle queries based on parameters set by the client with care. The client can send any value for these parameters, so it's not a good place to do authorization. If the parameter must be authenticated, use parameters from the JWT instead.
diff --git a/sync/advanced/overview.mdx b/sync/advanced/overview.mdx
index ad86d6ce..bb8f3343 100644
--- a/sync/advanced/overview.mdx
+++ b/sync/advanced/overview.mdx
@@ -1,6 +1,6 @@
---
title: "Advanced Topics"
-description: "Advanced topics relating to Sync Rules / Sync Streams."
+description: "Advanced topics relating to Sync Streams / Sync Rules."
sidebarTitle: Overview
---
@@ -9,7 +9,7 @@ sidebarTitle: Overview
-
+
diff --git a/sync/advanced/partitioned-tables.mdx b/sync/advanced/partitioned-tables.mdx
index 1846c016..f568d38e 100644
--- a/sync/advanced/partitioned-tables.mdx
+++ b/sync/advanced/partitioned-tables.mdx
@@ -3,25 +3,52 @@ title: "Partitioned Tables (Postgres)"
description: "Partitioned tables and wildcard table name matching"
---
-For partitioned tables in Postgres, each individual partition is replicated and processed using Sync Rules.
+For partitioned tables in Postgres, each individual partition is replicated and processed using [Sync Streams](/sync/streams/overview) (or legacy [Sync Rules](/sync/rules/overview)).
To use the same queries and same output table name for each partition, use `%` for wildcard suffix matching of the table name:
-```yaml
- by_user:
- # Use wildcard in a parameter query
- parameters: SELECT id AS user_id FROM "users_%"
- data:
- # Use wildcard in a data query
- - SELECT * FROM "todos_%" AS todos WHERE user_id = bucket.user_id
-```
+
+
+
+ ```yaml
+ streams:
+ user_todos:
+ queries:
+ # Wildcard matches all user partition tables (e.g. users_2024, users_2025)
+ - SELECT * FROM "users_%" WHERE id = auth.user_id()
+ # Wildcard matches all todo partition tables (e.g. todos_2024, todos_2025)
+ - SELECT * FROM "todos_%" AS todos WHERE user_id = auth.user_id()
+ ```
+
+
+ ```yaml
+ by_user:
+ # Use wildcard in a parameter query
+ parameters: SELECT id AS user_id FROM "users_%"
+ data:
+ # Use wildcard in a data query
+ - SELECT * FROM "todos_%" AS todos WHERE user_id = bucket.user_id
+ ```
+
+
The wildcard character can only be used as the last character in the table name.
-When using wildcard table names, the original table suffix is available in the special `_table_suffix` column:
+When using wildcard table names, the original table suffix is available in the special `_table_suffix` column. This works the same way in both Sync Streams and Sync Rules:
-```sql
-SELECT * FROM "todos_%" AS todos WHERE _table_suffix != 'archived'
-```
+
+
+ ```yaml
+ streams:
+ active_todos:
+ query: SELECT * FROM "todos_%" AS todos WHERE _table_suffix != 'archived'
+ ```
+
+
+ ```sql
+ SELECT * FROM "todos_%" AS todos WHERE _table_suffix != 'archived'
+ ```
+
+
When no table alias is provided, the original table name is preserved.
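The wildcard matching and `_table_suffix` extraction can be sketched as follows. This is an illustration of the matching rule, not the replication engine's implementation:

```javascript
// Match a replicated table name against a wildcard pattern, where '%' is
// only allowed as the last character. Returns the matched suffix (the
// value exposed as _table_suffix), or null when the name doesn't match.
function matchTableSuffix(pattern, tableName) {
  if (!pattern.endsWith('%')) {
    return pattern === tableName ? '' : null;
  }
  const prefix = pattern.slice(0, -1);
  return tableName.startsWith(prefix) ? tableName.slice(prefix.length) : null;
}
```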
diff --git a/sync/advanced/prioritized-sync.mdx b/sync/advanced/prioritized-sync.mdx
index e0267b1d..305913ab 100644
--- a/sync/advanced/prioritized-sync.mdx
+++ b/sync/advanced/prioritized-sync.mdx
@@ -5,7 +5,9 @@ description: "In some scenarios, you may want to sync tables using different pri
## Overview
-PowerSync supports defining sync priorities, which allows you to control the sync order for different buckets of data. This is particularly useful when certain data should be available sooner than others.
+PowerSync supports defining sync priorities, which allows you to control the sync order for different data. This is particularly useful when certain data should be available sooner than others.
+
+In Sync Streams, priorities are assigned to streams and PowerSync manages the underlying buckets internally. (In legacy Sync Rules, priorities were assigned to buckets explicitly.)
**Availability**
@@ -36,27 +38,93 @@ Each bucket is assigned a priority value between 0 and 3, where:
- 3 is the default and lowest priority.
- Lower numbers indicate higher priority.
-Buckets with higher priorities sync first, and lower-priority buckets sync later. It's worth noting that if you only use a single priority, there is no difference between priorities 1-3. The difference only comes in if you use multiple different priorities.
+Higher-priority data syncs first, and lower-priority data syncs later. If you only use a single priority, there is no difference between priorities 1-3. The difference only comes in when you use multiple different priorities.
+
+
+
+In Sync Streams, you assign priorities directly to streams. PowerSync manages buckets internally, so you don't need to think about bucket structure. Each stream with a given priority will have its data synced at that priority level.
+
+```yaml
+streams:
+ lists:
+ auto_subscribe: true
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+ priority: 1 # Syncs first
+
+ todos:
+ auto_subscribe: true
+ query: SELECT * FROM todos WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
+ priority: 2 # Syncs after lists
+```
+
+Clients can also override the priority when subscribing:
+
+```js
+// Override the stream's default priority for this subscription
+const sub = await db.syncStream('todos', { list_id: 'abc' }).subscribe({ priority: 1 });
+```
+
+When different components subscribe to the same stream with the same parameters but different priorities, PowerSync uses the highest priority for syncing. That higher priority is kept until the subscription ends (or its TTL expires). Subscriptions with different parameters are independent and do not conflict.
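The resolution rule can be sketched as follows. This is an illustration of the rule described above, not the SDK's internals:

```javascript
// Resolve the effective sync priority for subscriptions to the same
// stream with identical parameters. Each entry is a subscription's
// priority (explicit override, or the stream's default). Lower numbers
// are higher priority, so the effective priority is the minimum value.
function effectivePriority(subscriptionPriorities) {
  return Math.min(...subscriptionPriorities);
}
```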
+
+
+In Sync Rules, you assign priorities to bucket definitions. The priority determines when data in that bucket syncs relative to other buckets.
+
+```yaml
+bucket_definitions:
+ user_lists:
+ priority: 1 # Syncs first
+ parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id()
+ data:
+ - SELECT * FROM lists WHERE id = bucket.list_id
+
+ user_todos:
+ priority: 2 # Syncs after lists
+ parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id()
+ data:
+ - SELECT * FROM todos WHERE list_id = bucket.list_id
+```
+
+
## Syntax and Configuration
-Priorities can be defined for a bucket using the `priority` YAML key, or with the `_priority` attribute inside parameter queries:
+
+
+In Sync Streams, set the `priority` option on the stream definition:
+
+```yaml
+streams:
+ high_priority_data:
+ auto_subscribe: true
+ query: SELECT * FROM important_table WHERE user_id = auth.user_id()
+ priority: 1
+
+ low_priority_data:
+ auto_subscribe: true
+ query: SELECT * FROM background_table WHERE user_id = auth.user_id()
+ priority: 2
+```
+
+
+In Sync Rules, priorities can be defined using the `priority` YAML key on bucket definitions, or with the `_priority` attribute inside parameter queries:
```yaml
bucket_definitions:
# Using the `priority` YAML key
user_data:
priority: 1
- parameters: SELECT request.user_id() as id where...;
+ parameters: SELECT request.user_id() AS id WHERE ...
data:
# ...
- # Using the `_priority` attribute
+ # Using the `_priority` attribute (useful for multiple parameter queries with different priorities)
project_data:
- parameters: select id as project_id, 2 as _priority from projects where ...; # This approach is useful when you have multiple parameter queries with different priorities.
+ parameters: SELECT id AS project_id, 2 AS _priority FROM projects WHERE ...
data:
# ...
-```
+```
+
+
Priorities must be static and cannot depend on row values within a parameter query.
@@ -64,38 +132,57 @@ Priorities must be static and cannot depend on row values within a parameter que
## Example: Syncing Lists Before Todos
-Consider a scenario where you want to display lists immediately while loading todos in the background. This approach allows users to view and interact with lists right away without waiting for todos to sync. Here's how to configure sync priorities in your Sync Rules to achieve this:
+Consider a scenario where you want to display lists immediately while loading todos in the background. This approach allows users to view and interact with lists right away without waiting for todos to sync.
+
+
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ lists:
+ auto_subscribe: true
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+ priority: 1 # Syncs first
+
+ todos:
+ auto_subscribe: true
+ query: |
+ SELECT * FROM todos
+ WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
+ priority: 2 # Syncs after lists
+```
+The `lists` stream syncs first (priority 1), allowing users to see and interact with their lists immediately. The `todos` stream syncs afterward (priority 2), loading in the background.
+
+
```yaml
bucket_definitions:
user_lists:
- # Sync the user's lists with a higher priority
- priority: 1
- parameters: select id as list_id from lists where user_id = request.user_id()
+ priority: 1 # Syncs first
+ parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id()
data:
- - select * from lists where id = bucket.list_id
+ - SELECT * FROM lists WHERE id = bucket.list_id
user_todos:
- # Sync the user's todos with a lower priority
- priority: 2
- parameters: select id as list_id from lists where user_id = request.user_id()
+ priority: 2 # Syncs after lists
+ parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id()
data:
- - select * from todos where list_id = bucket.list_id
+ - SELECT * FROM todos WHERE list_id = bucket.list_id
```
-In this configuration:
-
-The `lists` bucket has the default priority of 1, meaning it syncs first.
-
-The `todos` bucket is assigned a priority of 2, meaning it may sync only after the lists have been synced.
+The `user_lists` bucket syncs first (priority 1), allowing users to see and interact with their lists immediately. The `user_todos` bucket syncs afterward (priority 2), loading in the background.
+
+
## Behavioral Considerations
-- **Interruption for Higher Priority Data**: Syncing lower-priority buckets _may_ be interrupted if new data for higher-priority buckets arrives.
-- **Local Changes & Consistency**: If local writes fail due to validation or permission issues, they are only reverted after _all_ buckets sync.
-- **Deleted Data**: Deleted data may only be removed after _all_ buckets have synced. Future updates may improve this behavior.
-- **Data Ordering**: Data in lower-priority buckets will never appear before higher-priority data.
+- **Interruption for Higher Priority Data**: Syncing lower-priority data _may_ be interrupted if new data for higher-priority streams/buckets arrives.
+- **Local Changes & Consistency**: If local writes fail due to validation or permission issues, they are only reverted after _all_ data has synced.
+- **Deleted Data**: Deleted data may only be removed after _all_ priorities have completed syncing. Future updates may improve this behavior.
+- **Data Ordering**: Lower-priority data will never appear before higher-priority data.
## Special Case: Priority 0
@@ -107,9 +194,9 @@ Caution: If misused, Priority 0 may cause flickering or inconsistencies, as upda
## Consistency Considerations
-PowerSync's full consistency guarantees only apply once all buckets have completed syncing.
+PowerSync's full consistency guarantees only apply once all priorities have completed syncing.
-When higher-priority buckets are synced, all inserts and updates within the buckets for the specific priority will be consistent. However, deletes are only applied when the full sync completes, so you may still have some stale data within those buckets.
+When higher-priority data is synced, all inserts and updates at that priority level will be consistent. However, deletes are only applied when the full sync completes, so you may still have some stale data at those priority levels.
Consider the following example:
@@ -132,7 +219,7 @@ PowerSync's client SDKs provide APIs to allow applications to track sync status
Using the above we can render a lists component only once the user's lists (with priority 1) have completed syncing, else display a message indicating that the sync is still in progress:
```dart
- // Define the priority level of the lists bucket
+ // Define the priority level for lists
static final _listsPriority = BucketPriority(1);
@override
diff --git a/sync/advanced/sharded-databases.mdx b/sync/advanced/sharded-databases.mdx
index 662f8f64..a78f2a68 100644
--- a/sync/advanced/sharded-databases.mdx
+++ b/sync/advanced/sharded-databases.mdx
@@ -22,9 +22,9 @@ Some specific scenarios:
#### 1\. Different tables on different databases
-This is common when separate "services" use separate databases, but multiple tables across those databases need to be synchronized to the same users.
+This is common when separate "services" use separate databases, but multiple tables across those databases need to be synced to the same users.
-Use a single PowerSync Service instance, with a separate connection for each source database ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release). Use a unique [connection tag](/sync/advanced/schemas-and-connections) for each source database, allowing them to be distinguished in the Sync Rules.
+Use a single PowerSync Service instance, with a separate connection for each source database ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release). Use a unique [connection tag](/sync/advanced/schemas-and-connections) for each source database, allowing them to be distinguished in your [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview).
#### 2a. All data for any single customer is contained in a single shard
@@ -40,4 +40,4 @@ If the amount of shared data is small, still use a separate PowerSync Service in
In some cases, most tables would be on a shared server, with only a few large tables being sharded.
-For this case, use a single PowerSync Service instance. Add each shard as a new connection on this instance ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release) — all with the same connection tag, so that the same Sync Rules apply to each.
+For this case, use a single PowerSync Service instance. Add each shard as a new connection on this instance ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release) — all with the same connection tag, so that the same [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) apply to each.
diff --git a/sync/advanced/sync-data-by-time.mdx b/sync/advanced/sync-data-by-time.mdx
index df3ca74f..a3c01296 100644
--- a/sync/advanced/sync-data-by-time.mdx
+++ b/sync/advanced/sync-data-by-time.mdx
@@ -1,6 +1,6 @@
---
title: "Guide: Sync Data by Time"
-description: "Learn how to sync data by time in Sync Rules."
+description: "Learn how to sync data by time using Sync Streams or legacy Sync Rules."
sidebarTitle: "Sync Data by Time"
---
@@ -8,21 +8,27 @@ A common need in offline-first apps is syncing data based on time, for example,
You might expect to write something like:
```yaml
-bucket_definitions
- issues_after_start_date:
- parameters: SELECT request.parameters() ->> 'start_at' as start_at
- data: SELECT * FROM issues WHERE updated_at > bucket.start_date
+# Sync Streams
+streams:
+ issues_after_start_date:
+ query: SELECT * FROM issues WHERE updated_at > subscription.parameter('start_at')
+
+# Sync Rules
+bucket_definitions:
+ issues_after_start_date:
+ parameters: SELECT request.parameters() ->> 'start_at' as start_at
+ data: SELECT * FROM issues WHERE updated_at > bucket.start_date
```
However, this won't work. Here's why.
# The Problem
-Sync rules only support a limited set of [operators](https://docs.powersync.com/usage/sync-rules/operators-and-functions) when filtering on parameters. You can use `=`, `IN`, and `IS NULL`, but not range operators like `>`, `<`, `>=`, or `<=`.
+PowerSync pre-computes and caches which rows belong to which parameters to enable efficient streaming. This means parameter-based filtering is limited to equality checks (`=`, `IN`, `IS NULL`) — range operators like `>`, `<`, `>=`, or `<=` are not supported on parameters.
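The reason equality works but ranges don't can be seen with a small sketch: pre-computed buckets are keyed by exact parameter values, so serving a client is a constant-time lookup rather than a scan. This illustrates the concept only, not the service's actual data structures:

```javascript
// Pre-compute bucket membership keyed by an exact parameter value.
// Serving a client is then a Map lookup on its parameter, which is why
// '=' / 'IN' / 'IS NULL' work but '>' or '<' cannot be pre-computed.
function buildBuckets(rows, keyFn) {
  const buckets = new Map();
  for (const row of rows) {
    const key = keyFn(row);
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(row);
  }
  return buckets;
}
```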
-Additionally, sync rule functions must be deterministic. Time-based functions like `now()` aren't allowed because the result changes depending on when the query runs.
+Additionally, time-based functions like `now()` aren't allowed in parameter expressions because the result changes depending on when the query runs, making pre-computation impossible.
-These constraints exist for good reason, they ensure buckets can be pre-computed and cached efficiently. But they make time-based filtering less obvious to implement.
+These constraints apply to both Sync Streams and legacy Sync Rules.
This guide covers a few practical workarounds.
@@ -45,26 +51,73 @@ Update it periodically using a cron job (e.g., with pg_cron):
UPDATE issues SET updated_this_week = (updated_at > now() - interval '7 days');
```
-```yaml
-bucket_definitions:
- recent_issues:
- data:
- - SELECT * FROM issues WHERE updated_this_week = true
-```
-For multiple time ranges, add multiple columns and let the client choose which bucket to sync:
-
-```yaml
-bucket_definitions:
- issues_1week:
- parameters: SELECT WHERE request.parameters() ->> 'range' = '1week'
- data:
- - SELECT * FROM issues WHERE updated_this_week = true
-
- issues_1month:
- parameters: SELECT WHERE request.parameters() ->> 'range' = '1month'
- data:
- - SELECT * FROM issues WHERE updated_this_month = true
-```
+
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ recent_issues:
+ auto_subscribe: true
+ query: SELECT * FROM issues WHERE updated_this_week = true
+ ```
+
+ For multiple time ranges, define a stream per range and let the client subscribe to the one it needs:
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ issues_1week:
+ query: SELECT * FROM issues WHERE updated_this_week = true
+
+ issues_1month:
+ query: SELECT * FROM issues WHERE updated_this_month = true
+ ```
+
+ The client subscribes to the desired range:
+
+ ```javascript
+ // Subscribe to one-week range
+ await db.syncStream('issues_1week').subscribe();
+ // Or subscribe to one-month range
+ await db.syncStream('issues_1month').subscribe();
+ ```
+
+
+ ```yaml
+ bucket_definitions:
+ recent_issues:
+ data:
+ - SELECT * FROM issues WHERE updated_this_week = true
+ ```
+
+ For multiple time ranges, add multiple bucket definitions and let the client choose which bucket to sync:
+
+ ```yaml
+ bucket_definitions:
+ issues_1week:
+ parameters: SELECT WHERE request.parameters() ->> 'range' = '1week'
+ data:
+ - SELECT * FROM issues WHERE updated_this_week = true
+
+ issues_1month:
+ parameters: SELECT WHERE request.parameters() ->> 'range' = '1month'
+ data:
+ - SELECT * FROM issues WHERE updated_this_month = true
+ ```
+
+ The client passes the desired range as a client parameter:
+
+ ```javascript
+ await db.connect(connector, {
+ params: {
+ range: '1week',
+ },
+ })
+ ```
+
+
This approach works well when you have a small, fixed set of time ranges. However, it requires schema changes and a scheduled job to keep the columns updated.
@@ -80,22 +133,46 @@ Instead of pre-defined ranges, create a bucket for each date and let the client
Use `substring` to extract the date portion from a timestamp and match it with `=`:
-```sql
-bucket_definitions:
- issues_by_update_at:
- parameters: SELECT value as date FROM json_each(request.parameters() ->> 'dates')
- data:
- - SELECT * FROM issues WHERE substring(updated_at, 1, 10) = bucket.date
-```
-The client then passes the dates it wants as connection params:
-
-```javascript
-await db.connect(connector, {
- params: {
- dates: ["2026-01-07", "2026-01-08", "2026-01-09"],
- },
-})
-```
+
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ issues_by_date:
+ query: SELECT * FROM issues WHERE substring(updated_at, 1, 10) = subscription.parameter('date')
+ ```
+
+ The client subscribes once per date it wants to sync:
+
+ ```javascript
+ await db.syncStream('issues_by_date', { date: '2026-01-07' }).subscribe();
+ await db.syncStream('issues_by_date', { date: '2026-01-08' }).subscribe();
+ await db.syncStream('issues_by_date', { date: '2026-01-09' }).subscribe();
+ ```
+
+ Each subscription can be managed independently — you can subscribe and unsubscribe to individual dates without affecting others.
+
+
+ ```yaml
+ bucket_definitions:
+ issues_by_update_at:
+ parameters: SELECT value as date FROM json_each(request.parameters() ->> 'dates')
+ data:
+ - SELECT * FROM issues WHERE substring(updated_at, 1, 10) = bucket.date
+ ```
+
+ The client passes the dates it wants as client parameters:
+
+ ```javascript
+ await db.connect(connector, {
+ params: {
+ dates: ["2026-01-07", "2026-01-08", "2026-01-09"],
+ },
+ })
+ ```
+
+
This gives users full control over which dates to sync, with no schema changes or scheduled jobs required.
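Clients typically derive the date keys programmatically. A minimal sketch of a hypothetical helper that builds the `YYYY-MM-DD` keys matching `substring(updated_at, 1, 10)` in the queries above; the base date is passed in to keep it deterministic:

```javascript
// Build the list of YYYY-MM-DD date keys for the last `days` days,
// newest first, matching substring(updated_at, 1, 10) in the queries.
function dateKeys(from, days) {
  const keys = [];
  for (let i = 0; i < days; i++) {
    const d = new Date(from.getTime() - i * 86400000); // 86400000 ms per day
    keys.push(d.toISOString().slice(0, 10));
  }
  return keys;
}
```

The resulting keys are then used as subscription parameters (Sync Streams) or passed in the `dates` client parameter (Sync Rules).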
@@ -109,38 +186,71 @@ You have to pick a granularity and stick with it. If that's a problem—say, you
## 3: Multiple Granularities
-Combine multiple granularities in a single bucket definition. This lets you use larger buckets (days) for older data and smaller buckets (hours, minutes) for recent data.
-
-```yaml
-bucket_definitions:
- issues_by_time:
- parameters: SELECT value as partition FROM json_each(request.parameters() ->> 'partitions')
- data:
- # By day (e.g., "2026-01-07")
- - SELECT * FROM issues WHERE substring(updated_at, 1, 10) = bucket.partition
- # By hour (e.g., "2026-01-07T14")
- - SELECT * FROM issues WHERE substring(updated_at, 1, 13) = bucket.partition
- # By 10 minutes (e.g., "2026-01-07T14:3")
- - SELECT * FROM issues WHERE substring(updated_at, 1, 15) = bucket.partition
-```
-
-The client then mixes granularities as needed:
-
-```javascript
-await db.connect(connector, {
- params: {
- partitions: [
- "2026-01-05",
- "2026-01-06",
- "2026-01-07T10",
- "2026-01-07T11",
- "2026-01-07T12:0",
- "2026-01-07T12:1",
- "2026-01-07T12:2"
- ]
- },
-})
-```
+Combine multiple granularities in a single definition. This lets you use larger buckets (days) for older data and smaller buckets (hours, minutes) for recent data.
+
+
+
+ ```yaml
+ config:
+ edition: 3
+ streams:
+ issues_by_partition:
+ queries:
+ # By day (e.g., "2026-01-07")
+ - SELECT * FROM issues WHERE substring(updated_at, 1, 10) = subscription.parameter('partition')
+ # By hour (e.g., "2026-01-07T14")
+ - SELECT * FROM issues WHERE substring(updated_at, 1, 13) = subscription.parameter('partition')
+ # By 10 minutes (e.g., "2026-01-07T14:3")
+ - SELECT * FROM issues WHERE substring(updated_at, 1, 15) = subscription.parameter('partition')
+ ```
+
+ The client subscribes once per partition, mixing granularities as needed:
+
+ ```javascript
+ await db.syncStream('issues_by_partition', { partition: '2026-01-05' }).subscribe();
+ await db.syncStream('issues_by_partition', { partition: '2026-01-06' }).subscribe();
+ await db.syncStream('issues_by_partition', { partition: '2026-01-07T10' }).subscribe();
+ await db.syncStream('issues_by_partition', { partition: '2026-01-07T11' }).subscribe();
+ await db.syncStream('issues_by_partition', { partition: '2026-01-07T12:0' }).subscribe();
+ await db.syncStream('issues_by_partition', { partition: '2026-01-07T12:1' }).subscribe();
+ await db.syncStream('issues_by_partition', { partition: '2026-01-07T12:2' }).subscribe();
+ ```
+
+ Each query naturally acts as a filter based on the length of the partition value — a day-format partition only matches the day query, an hour-format partition only matches the hour query, and so on.
+
+
+ ```yaml
+ bucket_definitions:
+ issues_by_time:
+ parameters: SELECT value as partition FROM json_each(request.parameters() ->> 'partitions')
+ data:
+ # By day (e.g., "2026-01-07")
+ - SELECT * FROM issues WHERE substring(updated_at, 1, 10) = bucket.partition
+ # By hour (e.g., "2026-01-07T14")
+ - SELECT * FROM issues WHERE substring(updated_at, 1, 13) = bucket.partition
+ # By 10 minutes (e.g., "2026-01-07T14:3")
+ - SELECT * FROM issues WHERE substring(updated_at, 1, 15) = bucket.partition
+ ```
+
+ The client then mixes granularities as needed:
+
+ ```javascript
+ await db.connect(connector, {
+ params: {
+ partitions: [
+ "2026-01-05",
+ "2026-01-06",
+ "2026-01-07T10",
+ "2026-01-07T11",
+ "2026-01-07T12:0",
+ "2026-01-07T12:1",
+ "2026-01-07T12:2"
+ ]
+ },
+ })
+ ```
+
+
This syncs January 5–6 by day, the morning of January 7 by hour, and the last 30 minutes in 10-minute chunks, without creating hundreds of buckets.
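Clients can derive partition keys at the appropriate granularity with a small helper. This is a hypothetical sketch matching the substring lengths used in the queries above:

```javascript
// Derive the partition key for an ISO timestamp at a chosen granularity,
// matching the substring lengths in the queries above:
// 10 chars = day, 13 = hour, 15 = ten-minute block.
const GRANULARITY = { day: 10, hour: 13, tenMinutes: 15 };

function partitionKey(isoTimestamp, granularity) {
  return isoTimestamp.slice(0, GRANULARITY[granularity]);
}
```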
@@ -156,11 +266,11 @@ Each row belongs to multiple buckets (replication overhead). Re-sync overhead wh
# Conclusion
-Time-based sync is a common need, but current sync rules don't support range operators or time-based functions directly.
+Time-based sync is a common need, but PowerSync doesn't support range operators or time-based functions on parameters directly.
To recap the workarounds:
- **Pre-defined time ranges** — Simplest option. Use when you have a fixed set of time ranges and don't mind schema changes.
- **Buckets Per Date** — More flexible. Use when you need arbitrary date ranges but can live with a single granularity.
- **Multiple Granularities** — Most flexible. Use when you need precision for recent data without syncing hundreds of buckets. Be mindful of the re-sync overhead.
-We're working on a more elegant solution. This guide will be updated when it's ready.
\ No newline at end of file
+We're working on a more elegant solution. This guide will be updated when it's ready.
diff --git a/sync/overview.mdx b/sync/overview.mdx
index 7146cff5..71bd078b 100644
--- a/sync/overview.mdx
+++ b/sync/overview.mdx
@@ -1,90 +1,87 @@
---
-title: "Sync Rules & Sync Streams"
+title: "Sync Streams and Sync Rules"
sidebarTitle: "Overview"
+description: PowerSync Sync Streams and the legacy Sync Rules allow developers to control which data syncs to which clients/devices (i.e. they enable partial sync).
---
-PowerSync Sync Rules and Sync Streams allow developers to control which data gets synchronized to which clients/devices (i.e. they enable dynamic partial replication).
+## Sync Streams (Beta) — Recommended
-## Sync Rules (GA/Stable)
+[Sync Streams](/sync/streams/overview) are now in beta and considered production-ready. We recommend Sync Streams for all new projects, and encourage existing projects to [migrate](/sync/streams/migration). Sync Streams are designed to give developers flexibility to either dynamically sync data on-demand, or to "sync data upfront" for offline-first use cases.
-Sync Rules is the current generally-available / stable approach to use, that is production-ready:
+Key improvements in Sync Streams over legacy Sync Rules include:
+- **On-demand syncing**: You define Sync Streams on the PowerSync Service, and a client can then subscribe to them one or more times with different parameters, on-demand. You still have the option of auto-subscribing streams when a client connects, for "sync data upfront" behavior.
+- **Temporary caching-like behavior**: Each subscription includes a configurable TTL that keeps data active after the client unsubscribes, acting as a warm cache for re-subscribing.
+- **Simpler developer experience**: Simplified syntax and mental model, and capabilities such as your UI components automatically managing subscriptions (for example, React hooks).
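As an illustrative sketch of the on-demand model (the stream, table, and parameter names here are assumptions; see the Sync Streams docs for the exact syntax):

```yaml
streams:
  # Auto-subscribed: synced upfront for every connected client.
  user_lists:
    query: SELECT * FROM todo_lists WHERE owner_id = auth.user_id()
    auto_subscribe: true

  # On-demand: synced only while a client holds a subscription with a
  # specific list_id, then kept warm for the configured TTL.
  list_items:
    query: SELECT * FROM todo_items WHERE list_id = subscription.parameter('list_id')
```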
-
-
+
-## Sync Streams (Early Alpha)
+## Sync Rules (Legacy)
-[Sync Streams](/sync/streams/overview) are now available in early alpha! Sync Streams will eventually replace Sync Rules and are designed to allow for more dynamic on-demand syncing, while not compromising on the "sync data upfront" strengths of PowerSync for offline-first architecture use cases.
+Sync Rules is the legacy approach for controlling data sync. It remains available and supported for existing projects:
-Key improvements in Sync Streams over Sync Rules include:
-- **On-demand syncing**: You define Sync Streams on the PowerSync Service, and a client can then subscribe to them one or more times with different parameters, on-demand. You still have the option of auto-subscribing streams when a client connects, for "sync data upfront" behavior.
-- **Temporary caching-like behavior**: Each subscription includes a configurable TTL that keeps data active after the client unsubscribes, acting as a warm cache for re-subscribing.
-- **Simpler developer experience**: Simplified syntax and mental model, and capabilities such as your UI components automatically managing subscriptions (for example, React hooks).
-
-We encourage you to explore Sync Streams, and once they're in Beta, migrating existing projects:
+
-
-
+If you're currently using Sync Rules and want to migrate to Sync Streams, see our [migration docs](/sync/streams/migration).
## How It Works
You may also find it useful to look at the [PowerSync Service architecture](/architecture/powersync-service) for background.
-Each [PowerSync Service](/architecture/powersync-service) instance has a deployed _Sync Rules_ or _Sync Streams_ configuration. This takes the form of a YAML file which contains:
-- **In the case of Sync Rules:** Definitions of the different [buckets](/architecture/powersync-service#bucket-system) that exist, with SQL-like queries to specify the parameters used by each bucket (if any), as well as the data contained in each bucket.
+Each [PowerSync Service](/architecture/powersync-service) instance has a deployed _Sync Streams_ (or legacy _Sync Rules_) configuration. This takes the form of a YAML file which contains:
- **In the case of Sync Streams:** Definitions of the streams that exist, with a SQL-like query (which can also contain limited subqueries), which defines the data in the stream, and references the necessary parameters.
+- **In the case of Sync Rules:** Definitions of the different [buckets](/architecture/powersync-service#bucket-system) that exist, with SQL-like queries to specify the parameters used by each bucket (if any), as well as the data contained in each bucket.
-A _parameter_ is a value that can be used in Sync Rules or Streams to create dynamic sync behavior for each user/client. Each client syncs only the relevant [_buckets_](/architecture/powersync-service#bucket-system) based on the parameters for that client.
-* Sync Rules can make use of _authentication parameters_ from the JWT token (such as the user ID or other JWT claims), as well [_client parameters_](/sync/rules/client-parameters) (passed directly from the client when it connects to the PowerSync Service).
-* Sync Streams can similarly make use of _authentication parameters_ from the JWT token, _connection parameters_ (the equivalent of _client parameters_, specified at connection), and _subscription parameters_ (specified by the client when it subscribes to a stream at any time). See details [here](/sync/streams/overview#accessing-parameters).
+A _parameter_ is a value that can be used in Sync Streams (or legacy Sync Rules) to create dynamic sync behavior for each user/client. Each client syncs only the relevant [_buckets_](/architecture/powersync-service#bucket-system) based on the parameters for that client.
+* Sync Streams can make use of _authentication parameters_ from the JWT token (such as the user ID or other JWT claims), _connection parameters_ (specified at connection), and _subscription parameters_ (specified by the client when it subscribes to a stream at any time). See [Using Parameters](/sync/streams/parameters).
+* Sync Rules can make use of _authentication parameters_ from the JWT token, as well as [_client parameters_](/sync/rules/client-parameters) (passed directly from the client when it connects to the PowerSync Service).
-It is also possible to have buckets with no parameters. These sync to all users/clients and we refer to them as "Global Buckets".
+It is also possible to have buckets/streams with no parameters. In the case of Sync Rules, these buckets sync to all users/clients automatically and are referred to as "Global Buckets".
The concept of _buckets_ is core to PowerSync and key to its performance and scalability. The [PowerSync Service architecture overview](/architecture/powersync-service) provides more background on this.
-* In the _Sync Rules_ system, buckets and their parameters are [explicitly defined](/sync/rules/overview#bucket-definition).
-* In our new _Sync Streams_ system which is in early alpha, buckets and parameters are not explicitly defined, and are instead implicit based on the streams, their queries and subqueries.
+* In _Sync Streams_, buckets and parameters are implicit — they are automatically created based on the streams, their queries and subqueries. You don't need to explicitly define the buckets that exist.
+* In legacy _Sync Rules_, buckets and their parameters are [explicitly defined](/sync/rules/overview#bucket-definition).
-There are limitations on the SQL syntax and functionality that is supported in the Sync Rules and Sync Streams. For Sync Rules, details and limitations are documented at [Supported SQL](/sync/rules/supported-sql).
+There are limitations on the SQL syntax and functionality that is supported in Sync Streams and Sync Rules. See [Supported SQL](/sync/supported-sql) for details and limitations.
-In addition to filtering data based on parameters, Sync Rules and Sync Streams also enable:
+In addition to filtering data based on parameters, Sync Streams and Sync Rules also enable:
* Selecting only specific tables/collections and columns/fields to sync.
* Filtering data based on static conditions.
* Transforming column/field names and values.
-### Sync Rules/Streams Determine Replication From the Source Database
+### Sync Streams/Rules Determine Replication From the Source Database
-A PowerSync Service instance [replicates and transforms](/architecture/powersync-service#replication-from-the-source-database) relevant data from the backend source database according to the deployed Sync Rules or Sync Streams. During replication, data and metadata is persisted in [buckets](/architecture/powersync-service#bucket-system) on the PowerSync Service. Buckets are incrementally updated so that they contain the latest state as well as a history of changes (operations). This is key to how PowerSync achieves efficient delta syncing — having the operation history for each bucket allows clients to sync only the deltas that they need to get up to date (see [Protocol](/architecture/powersync-protocol#protocol) for more details).
+A PowerSync Service instance [replicates and transforms](/architecture/powersync-service#replication-from-the-source-database) relevant data from your backend source database according to your Sync Streams (or legacy Sync Rules). During replication, data and metadata are persisted in [buckets](/architecture/powersync-service#bucket-system) on the PowerSync Service. Buckets are incrementally updated so that they contain the latest state as well as a history of changes (operations). This is key to how PowerSync achieves efficient delta syncing — having the operation history for each bucket allows clients to sync only the deltas that they need to get up to date (see [Protocol](/architecture/powersync-protocol#protocol) for more details).
As a practical example, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be embedded in the JWT). Now let's say users with IDs `A` and `B` exist in the source database. PowerSync will then replicate data from the source database and create individual buckets with IDs `user_todo_lists["A"]` and `user_todo_lists["B"]`. When the user with ID `A` connects, they can efficiently sync just the bucket with ID `user_todo_lists["A"]`.
-
+
-### Sync Rules/Streams Determine Real-Time Streaming Sync to Clients
+### Sync Streams/Rules Determine Real-Time Streaming Sync to Clients
-Whenever buckets are updated (buckets added or removed, or operations added to existing buckets), these changes are [streamed in real-time](/architecture/powersync-service#streaming-sync) to clients based on the Sync Rules/Streams.
+Whenever buckets are updated (buckets added or removed, or operations added to existing buckets), these changes are [streamed in real-time](/architecture/powersync-service#streaming-sync) to clients based on the Sync Streams (or legacy Sync Rules).
-This syncing behavior can be highly dynamic: in the case of Sync Rules, syncing will dynamically adjust based on changes in _client parameters_ and _authentication parameters_, and in the case of Sync Streams, syncing will dynamically adjust based on the stream subscriptions (which can make use of _subscription parameters_), as well as _connection parameters_ and _authentication parameters_ (from the JWT).
+This syncing behavior can be highly dynamic: in the case of Sync Streams, syncing will dynamically adjust based on the stream subscriptions (which can make use of _subscription parameters_), as well as _connection parameters_ and _authentication parameters_ (from the JWT). In the case of Sync Rules, syncing will dynamically adjust based on changes in _client parameters_ and _authentication parameters_.
-The bucket data is persisted in SQLite on the client-side, where it is easily queryable based on the [client-side schema](/intro/setup-guide#define-your-client-side-schema), which corresponds to the Sync Rules/Streams.
+The bucket data is persisted in SQLite on the client-side, where it is easily queryable based on the [client-side schema](/intro/setup-guide#define-your-client-side-schema), which corresponds to the Sync Streams/Rules.
For more information on the client-side SQLite database structure, see [Client Architecture](/architecture/client-architecture#client-side-schema-and-sqlite-database-structure).
-
-
+
+
diff --git a/sync/rules/client-parameters.mdx b/sync/rules/client-parameters.mdx
index 2d72655a..e001dc26 100644
--- a/sync/rules/client-parameters.mdx
+++ b/sync/rules/client-parameters.mdx
@@ -13,11 +13,11 @@ PowerSync already supports using **token parameters** in parameter queries. An e
**Client parameters** are specified directly by the client (i.e. not through the JWT authentication token). The advantage of client parameters is that they give client-side control over what data to sync, and can therefore be used to further filter or limit synced data. A common use case is [lazy-loading](/client-sdks/infinite-scrolling#2-control-data-sync-using-client-parameters), where data is split into pages and a client parameter can be used to specify which page(s) to sync to a user, and this can update dynamically as the user paginates (or reaches the end of an infinite-scrolling feed).
-
- [Sync Streams](/sync/streams/overview) make it easier to use client parameters, especially for apps where parameters are managed across different UI components and tabs.
+
+ [Sync Streams](/sync/streams/overview) make it easier to manage dynamic parameters, especially for apps where parameters are managed across different UI components and tabs. Sync Streams offer _subscription parameters_ (specified when subscribing to a stream) and _connection parameters_ (the equivalent of client parameters).
- For new apps that require client parameters, we recommend using [Sync Streams](/sync/streams/overview) (Early Alpha).
-
+ We recommend Sync Streams for new projects, and [migrating](/sync/streams/migration) existing projects.
+
### Usage
diff --git a/sync/rules/data-queries.mdx b/sync/rules/data-queries.mdx
index e436386f..c673cd93 100644
--- a/sync/rules/data-queries.mdx
+++ b/sync/rules/data-queries.mdx
@@ -18,7 +18,7 @@ Data Queries are used to group data into buckets, so each Data Query must use ev
## Supported SQL
-The supported SQL in Data Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/rules/supported-sql) for full details.
+The supported SQL in Data Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql) for full details.
## Examples
diff --git a/sync/advanced/many-to-many-and-join-tables.mdx b/sync/rules/many-to-many-join-tables.mdx
similarity index 80%
rename from sync/advanced/many-to-many-and-join-tables.mdx
rename to sync/rules/many-to-many-join-tables.mdx
index e03dada1..8a594c15 100644
--- a/sync/advanced/many-to-many-and-join-tables.mdx
+++ b/sync/rules/many-to-many-join-tables.mdx
@@ -1,10 +1,15 @@
---
title: "Guide: Many-to-Many and Join Tables"
sidebarTitle: "Many-to-Many and Join Tables"
+description: Strategies for handling many-to-many relationships in Sync Rules, which don't support JOINs directly.
---
Join tables are often used to implement many-to-many relationships between tables. Join queries are not directly supported in PowerSync Sync Rules, and require some workarounds depending on the use case. This guide contains some recommended strategies.
+
+**Using Sync Streams?** Sync Streams support [JOINs](/sync/streams/queries#using-joins) and [nested subqueries](/sync/streams/queries#using-subqueries), which handle most many-to-many relationships directly without the workarounds described here. See [Many-to-Many with Sync Streams](/sync/streams/examples#many-to-many-relationships) for examples.
+
+
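As a hedged sketch of what this looks like with Sync Streams (exact syntax per the Sync Streams docs may differ), the board subscriptions used in this guide's example could be expressed with a subquery instead of a separate parameter query:

```yaml
streams:
  board_posts:
    # The subquery replaces the Sync Rules parameter query.
    query: |
      SELECT posts.* FROM posts
      WHERE posts.board_id IN (
        SELECT board_id FROM board_subscriptions
        WHERE user_id = auth.user_id()
      )
    auto_subscribe: true
```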
**Postgres users:** For Postgres source databases, you can use the [`pg_ivm` extension](https://www.powersync.com/blog/using-pg-ivm-to-enable-joins-in-powersync) to create incrementally maintained materialized views with JOINs that can be referenced directly in Sync Rules. This approach avoids the need to denormalize your schema.
@@ -17,9 +22,7 @@ As an example, consider a social media application. The app has message boards.
-
+
```sql
create table users (
id uuid not null default gen_random_uuid (),
@@ -96,10 +99,11 @@ The relationship between users and boards is a many-to-many, specified via the `
To start with, in our PowerSync Sync Rules, we define a [bucket](/sync/rules/organize-data-into-buckets) and sync the posts. The [parameter query](/sync/rules/parameter-queries) is defined using the `board_subscriptions` table:
```yaml
+bucket_definitions:
board_data:
- parameters: select board_id from board_subscriptions where user_id = request.user_id()
+ parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- - select * from posts where board_id = bucket.board_id
+ - SELECT * FROM posts WHERE board_id = bucket.board_id
```
### Avoiding joins in data queries: Denormalize relationships (comments)
@@ -122,24 +126,26 @@ ALTER TABLE comments ADD CONSTRAINT comments_board_id_fkey FOREIGN KEY (board_id
Now we can add it to the bucket definition in our Sync Rules:
```yaml
+bucket_definitions:
board_data:
- parameters: select board_id from board_subscriptions where user_id = request.user_id()
+ parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- - select * from posts where board_id = bucket.board_id
+ - SELECT * FROM posts WHERE board_id = bucket.board_id
# Add comments:
- - select * from comments where board_id = bucket.board_id
+ - SELECT * FROM comments WHERE board_id = bucket.board_id
```
Now we want to sync topics of posts. In this case we added `board_id` from the start, so `post_topics` is simple in our Sync Rules:
```yaml
+bucket_definitions:
board_data:
- parameters: select board_id from board_subscriptions where user_id = request.user_id()
+ parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- - select * from posts where board_id = bucket.board_id
- - select * from comments where board_id = bucket.board_id
+ - SELECT * FROM posts WHERE board_id = bucket.board_id
+ - SELECT * FROM comments WHERE board_id = bucket.board_id
# Add post_topics:
- - select * from post_topics where board_id = bucket.board_id
+ - SELECT * FROM post_topics WHERE board_id = bucket.board_id
```
### Many-to-many strategy: Sync everything (topics)
@@ -149,9 +155,10 @@ Now we need access to sync the topics for all posts synced to the device. There
If the topics table is limited in size (say 1,000 or less), the simplest solution is to just sync all topics in our Sync Rules:
```yaml
+bucket_definitions:
global_topics:
data:
- - select * from topics where board_id = bucket.board_id
+ - SELECT * FROM topics
```
### Many-to-many strategy: Denormalize data (topics, user names)
@@ -175,14 +182,15 @@ ALTER TABLE board_subscriptions ADD COLUMN user_name text;
Sync Rules:
```yaml
+bucket_definitions:
board_data:
- parameters: select board_id from board_subscriptions where user_id = request.user_id()
+ parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
data:
- - select * from posts where board_id = bucket.board_id
- - select * from comments where board_id = bucket.board_id
- - select * from post_topics where board_id = bucket.board_id
+ - SELECT * FROM posts WHERE board_id = bucket.board_id
+ - SELECT * FROM comments WHERE board_id = bucket.board_id
+ - SELECT * FROM post_topics WHERE board_id = bucket.board_id
# Add subscriptions which include the names:
- - select * from board_subscriptions where board_id = bucket.board_id
+ - SELECT * FROM board_subscriptions WHERE board_id = bucket.board_id
```
### Many-to-many strategy: Array of IDs (user profiles)
@@ -198,21 +206,20 @@ ALTER TABLE users ADD COLUMN subscribed_board_ids uuid[];
By using an array instead of or in addition to a join table, we can use it directly in Sync Rules:
```yaml
-board_data:
- parameters: select board_id from board_subscriptions where user_id = request.user_id()
- data:
- - select * from posts where board_id = bucket.board_id
- - select * from comments where board_id = bucket.board_id
- - select * from post_topics where board_id = bucket.board_id
- # Add participating users:
- - select name, last_activity, profile_picture, bio from users where bucket.board_id in subscribed_board_ids
+bucket_definitions:
+ board_data:
+ parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id()
+ data:
+ - SELECT * FROM posts WHERE board_id = bucket.board_id
+ - SELECT * FROM comments WHERE board_id = bucket.board_id
+ - SELECT * FROM post_topics WHERE board_id = bucket.board_id
+ # Add participating users:
+ - SELECT name, last_activity, profile_picture, bio FROM users WHERE bucket.board_id IN subscribed_board_ids
```
This approach does require some extra effort to keep the array up to date. One option is to use a trigger in the case of Postgres:
-
+
```sql
CREATE OR REPLACE FUNCTION recalculate_subscribed_boards()
RETURNS TRIGGER AS $$
diff --git a/sync/rules/organize-data-into-buckets.mdx b/sync/rules/organize-data-into-buckets.mdx
index e5feba62..eb112342 100644
--- a/sync/rules/organize-data-into-buckets.mdx
+++ b/sync/rules/organize-data-into-buckets.mdx
@@ -43,7 +43,7 @@ bucket_definitions:
- The supported SQL in _Parameter Queries_ and _Data Queries_ is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/rules/supported-sql).
+ The supported SQL in _Parameter Queries_ and _Data Queries_ is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql).
diff --git a/sync/rules/overview.mdx b/sync/rules/overview.mdx
index 73db1f24..20f2ad80 100644
--- a/sync/rules/overview.mdx
+++ b/sync/rules/overview.mdx
@@ -1,15 +1,18 @@
---
-title: "Sync Rules"
+title: "Sync Rules (Legacy)"
sidebarTitle: "Overview & Key Concepts"
+description: Understand Sync Rules, the legacy mechanism for controlling data sync with explicit bucket definitions and parameter queries.
---
-PowerSync Sync Rules is the current generally-available/stable/production-ready mechanism to control which data gets synchronized to which clients/devices (i.e. they enable _dynamic partial replication_).
+PowerSync Sync Rules is the legacy mechanism for controlling which data gets synced to which clients/devices (i.e. it enables _partial sync_).
-
-**Sync Streams Available in Early Alpha**
+
+**Sync Streams Recommended**
-[Sync Streams](/sync/streams/overview) are now available in early alpha! Sync Streams will eventually replace Sync Rules and are designed to allow for more dynamic syncing, while not compromising on existing offline-first capabilities. See the [Overview](/sync/overview) page for more details.
-
+[Sync Streams](/sync/streams/overview) are now in beta and production-ready. We recommend Sync Streams for all new projects — they offer a simpler developer experience, on-demand syncing with subscription parameters, and caching-like behavior with TTL.
+
+Existing projects should [migrate to Sync Streams](/sync/streams/migration). Sync Rules remain supported but are considered legacy.
+
Sync Rules are defined in a YAML file. For PowerSync Cloud, they are edited and deployed to a specific PowerSync instance in the [PowerSync Dashboard](/tools/powersync-dashboard#project-&-instance-level). For self-hosting setups, they are defined as part of your [instance configuration](/configuration/powersync-service/self-hosted-instances).
@@ -49,7 +52,7 @@ The following values can be selected in Parameter Queries:
- **Client Parameters** (see below)
- **Values From a Table/Collection** (see below)
-See [Parameter Queries](/sync/rules/parameter-queries) for more details and examples. Also see [Supported SQL](/sync/rules/supported-sql) for limitations.
+See [Parameter Queries](/sync/rules/parameter-queries) for more details and examples. Also see [Supported SQL](/sync/supported-sql) for limitations.
### Authentication Parameters
@@ -69,7 +72,7 @@ Clients can specify **Client Parameters** when connecting to PowerSync (i.e. whe
```yaml Example of selecting a Client Parameter in a Parameter Query
parameters: SELECT (request.parameters() ->> 'current_project') as current_project
```
-The `->>` operator in the above example extracts a value from a string containing JSON (which is the format provided by ``request.parameters()``). See [Operators and Functions](/sync/rules/supported-sql#operators)
+The `->>` operator in the above example extracts a value from a string containing JSON (which is the format provided by ``request.parameters()``). See [Operators and Functions](/sync/supported-sql#operators)
A client can pass any value for a Client Parameter. Hence, Client Parameters should always be treated with care, and should [not be used](/sync/rules/client-parameters#security-consideration) for access control purposes.
@@ -100,7 +103,7 @@ data:
- SELECT * FROM lists WHERE owner_id = bucket.user_id
```
-See [Data Queries](/sync/rules/data-queries) for more details and examples. Also see [Supported SQL](/sync/rules/supported-sql) for limitations.
+See [Data Queries](/sync/rules/data-queries) for more details and examples. Also see [Supported SQL](/sync/supported-sql) for limitations.
### Global Buckets
@@ -112,7 +115,7 @@ If no **Parameter Query** is specified in the bucket definition, the bucket is a
## Potential Parameter Values Determine Created Buckets
-When your PowerSync Service instance [replicates data from your source database](/architecture/powersync-service#replication-from-the-source-database) based on your Sync Rules configuration (i.e. your bucket definitions), it finds all possible values for your defined parameters in the relevant tables/collections in your source database, and creates individual buckets based on those values.
+When your PowerSync Service instance [replicates data from your source database](/architecture/powersync-service#replication-from-the-source-database) based on your Sync Rules (i.e. your bucket definitions), it finds all possible values for your defined parameters in the relevant tables/collections in your source database, and creates individual buckets based on those values.
For example, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be obtained from the JWT) to scope those to-do lists. Now let's say users with IDs `1`, `2` and `3` exist in the source database. PowerSync will then replicate data from the source database and preemptively create individual buckets with bucket IDs of `user_todo_lists["1"]`, `user_todo_lists["2"]` and `user_todo_lists["3"]`.
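A minimal bucket definition for this example could look as follows (the `todo_lists` table and `owner_id` column are illustrative):

```yaml
bucket_definitions:
  user_todo_lists:
    # One bucket instance is created per distinct user_id value.
    parameters: SELECT request.user_id() as user_id
    data:
      - SELECT * FROM todo_lists WHERE owner_id = bucket.user_id
```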
diff --git a/sync/rules/parameter-queries.mdx b/sync/rules/parameter-queries.mdx
index 97eae31d..c9280343 100644
--- a/sync/rules/parameter-queries.mdx
+++ b/sync/rules/parameter-queries.mdx
@@ -26,7 +26,7 @@ The following functions allow you to select Authentication Parameters in your Pa
| `request.user_id()` | Returns the JWT subject (`sub`). Same as `request.jwt() ->> 'sub'` (see below) |
| `request.jwt()` | Returns the entire (signed) JWT payload as a JSON string. If there are other _claims_ in your JWT (in addition to the user ID), you can select them from this JSON string. |
-Since `request.jwt()` is a string containing JSON, use the `->>` [operator](/sync/rules/supported-sql#operators) to select values from it:
+Since `request.jwt()` is a string containing JSON, use the `->>` [operator](/sync/supported-sql#operators) to select values from it:
```sql
request.jwt() ->> 'sub' -- the 'subject' of the JWT - same as `request.user_id()`
@@ -118,7 +118,7 @@ bucket_definitions:
## Supported SQL
-The supported SQL in Parameter Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/rules/supported-sql) for full details.
+The supported SQL in Parameter Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql) for full details.
## Usage Examples
@@ -197,12 +197,12 @@ bucket_definitions:
Keep in mind that the total number of buckets per user should [remain limited](/sync/rules/organize-data-into-buckets#limit-on-number-of-buckets-per-client) (\<= 1,000 [by default](/resources/performance-and-limits)), so buckets should not be too granular.
-For more advanced details on many-to-many relationships and join tables, see [this guide](/sync/advanced/many-to-many-and-join-tables).
+For more advanced details on many-to-many relationships and join tables, see [this guide](/sync/rules/many-to-many-join-tables).
### Expanding JSON Array Into Multiple Parameters
-Using the `json_each()` [function](/sync/rules/supported-sql#functions) and `->` [operator](/sync/rules/supported-sql#operators), we can expand a parameter that is a JSON array into multiple rows, thereby filtering by multiple parameter values:
+Using the `json_each()` [function](/sync/supported-sql#functions) and `->` [operator](/sync/supported-sql#operators), we can expand a parameter that is a JSON array into multiple rows, thereby filtering by multiple parameter values:
```yaml
bucket_definitions:
diff --git a/sync/rules/supported-sql.mdx b/sync/rules/supported-sql.mdx
deleted file mode 100644
index ad2609f6..00000000
--- a/sync/rules/supported-sql.mdx
+++ /dev/null
@@ -1,73 +0,0 @@
----
-title: "Supported SQL"
----
-
-## Parameter Queries
-
-The supported SQL is based on a small subset of the SQL standard syntax.
-
-Notable features and restrictions:
-
-1. Only simple `SELECT` statements are supported.
-2. No `JOIN`, `GROUP BY` or other aggregation, `ORDER BY`, `LIMIT`, or subqueries are supported.
-3. For token parameters, only `=` operators are supported, and `IN` to a limited extent.
-4. A limited set of operators and functions are supported — see below.
-
-
-## Operators and Functions
-
-Operators and functions can be used to transform columns/fields before being synced to a client.
-
-When filtering on parameters (token or [client parameters](/sync/rules/client-parameters) in the case of [parameter queries](/sync/rules/parameter-queries), and bucket parameters in the case of [data queries](/sync/rules/data-queries)), operators can only be used in a limited way. Typically only `=` , `IN` and `IS NULL` are allowed on the parameters, and special limits apply when combining clauses with `AND`, `OR` or `NOT`.
-
-When transforming output columns/fields, or filtering on row/document values, those restrictions do not apply.
-
-If a specific operator or function is needed, please [contact us](/resources/contact-us) so that we can consider inclusion in our roadmap.
-
-Some fundamental restrictions on these operators and functions are:
-
-1. It must be deterministic — no random or time-based functions.
-2. No external state can be used.
-3. It must operate on data available within a single row/document. For example, no aggregation functions allowed.
-
-
-### Operators
-
-| Operator | Notes |
-| ------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
-| Comparison: `= != < > <= >=` | If either parameter is null, this evaluates to null. |
-| Null: `IS NULL`, `IS NOT NULL` | |
-| Mathematical: `+ - * /` | |
-| Logical: `AND`, `OR`, `NOT` | |
-| Cast: `CAST(x AS type)` `x :: type` | Cast to text, numeric, integer, real or blob. |
-| JSON: `json -> 'path'` `json ->> 'path'` | `->` Returns the value as a JSON string. `->>` Returns the extracted value. |
-| Text concatenation: `\|\|` | Joins two text values together. |
-| Arrays: ` IN ` | Returns true if the `left` value is present in the `right` JSON array. In data queries, only the `left` value may be a bucket parameter. In parameter queries, the `left` or `right` value may be a bucket parameter. Differs from the SQLite operator in that it can be used directly on a JSON array. |
-
-### Functions
-
-| Function | Description |
-| -------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| [upper(text)](https://www.sqlite.org/lang_corefunc.html#upper) | Convert text to upper case. |
-| [lower(text)](https://www.sqlite.org/lang_corefunc.html#lower) | Convert text to lower case. |
-| [substring(text, start, length)](https://sqlite.org/lang_corefunc.html#substr) | Extracts a portion of a string based on specified start index and length. Start index is 1-based. Example: `substring(created_at, 1, 10)` returns the date portion of the timestamp. |
-| [hex(data)](https://www.sqlite.org/lang_corefunc.html#hex) | Convert blob or text data to hexadecimal text. |
-| base64(data) | Convert blob or text data to base64 text. |
-| [length(data)](https://www.sqlite.org/lang_corefunc.html#length) | For text, return the number of characters. For blob, return the number of bytes. For null, return null. For integer and real, convert to text and return the number of characters. |
-| [typeof(data)](https://www.sqlite.org/lang_corefunc.html#typeof) | text, integer, real, blob or null |
-| [json\_each(data)](https://www.sqlite.org/json1.html#jeach) | Expands a JSON array or object from a request or token parameter into a set of parameter rows. Example: `SELECT value as project_id FROM json_each(request.jwt() -> 'project_ids')` |
-| [json\_extract(data, path)](https://www.sqlite.org/json1.html#jex) | Same as `->>` operator, but the path must start with `$.` |
-| [json\_array\_length(data)](https://www.sqlite.org/json1.html#jarraylen) | Given a JSON array (as text), returns the length of the array. If data is null, returns null. If the value is not a JSON array, returns 0. |
-| [json\_valid(data)](https://www.sqlite.org/json1.html#jvalid) | Returns 1 if the data can be parsed as JSON, 0 otherwise. |
-| json\_keys(data) | Returns the set of keys of a JSON object as a JSON array. Example: `select * from items where bucket.user_id in json_keys(permissions_json)` |
-| [ifnull(x,y)](https://www.sqlite.org/lang_corefunc.html#ifnull) | Returns x if non-null, otherwise returns y. |
-| [iif(x,y,z)](https://www.sqlite.org/lang_corefunc.html#iif) | Returns y if x is true, otherwise returns z. |
-| [uuid_blob(id)](https://sqlite.org/src/file/ext/misc/uuid.c) | Convert a UUID string to bytes. |
-| [unixepoch(datetime, \[modifier\])](https://www.sqlite.org/lang_datefunc.html) | Returns a datetime as a Unix timestamp. If the modifier is "subsec", the result is a floating point number, with milliseconds included in the fraction. The datetime argument is required - this function cannot be used to get the current time. |
-| [datetime(datetime, \[modifier\])](https://www.sqlite.org/lang_datefunc.html) | Returns a datetime as a datetime string, in the format YYYY-MM-DD HH:MM:SS. If the modifier is "subsec", milliseconds are also included. If the modifier is "unixepoch", the argument is interpreted as a Unix timestamp. Both modifiers can be included: `datetime(timestamp, 'unixepoch', 'subsec')`. The datetime argument is required - this function cannot be used to get the current time. |
-| [ST\_AsGeoJSON(geometry)](https://postgis.net/docs/ST_AsGeoJSON.html) | Convert [PostGIS](https://postgis.net/) (in Postgres) geometry from WKB to GeoJSON. Combine with JSON operators to extract specific fields. |
-| [ST\_AsText(geometry)](https://postgis.net/docs/ST_AsText.html) | Convert [PostGIS](https://postgis.net/) (in Postgres) geometry from WKB to Well-Known Text (WKT). |
-| [ST\_X(point)](https://postgis.net/docs/ST_X.html) | Get the X coordinate of a [PostGIS](https://postgis.net/) point (in Postgres) |
-| [ST\_Y(point)](https://postgis.net/docs/ST_Y.html) | Get the Y coordinate of a [PostGIS](https://postgis.net/) point (in Postgres) |
-
-Most of these functions are based on the [built-in SQLite functions](https://www.sqlite.org/lang_corefunc.html) and [SQLite JSON functions](https://www.sqlite.org/json1.html).
diff --git a/sync/streams/client-usage.mdx b/sync/streams/client-usage.mdx
new file mode 100644
index 00000000..54745022
--- /dev/null
+++ b/sync/streams/client-usage.mdx
@@ -0,0 +1,463 @@
+---
+title: "Client-Side Usage"
+description: Subscribe to Sync Streams from your client app, manage subscriptions, and track sync progress.
+---
+
+After [defining your streams](/sync/streams/overview#defining-streams) on the server-side, your client app subscribes to them to start syncing data (this is an explicit operation unless streams are configured to [auto-subscribe](/sync/streams/overview#using-auto-subscribe)). This page covers everything you need to use Sync Streams from your client code.
+
+## Quick Start
+
+Streams that are configured to [auto-subscribe](/sync/streams/overview#using-auto-subscribe) will automatically start syncing as soon as you connect to your PowerSync instance in your client-side application.
+
+For any other streams, the basic pattern is: **subscribe** to a stream, **wait** for data to sync, then **unsubscribe** when done.
+
+
+
+```js
+// Subscribe to a stream with parameters
+const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();
+
+// Wait for initial data to sync
+await sub.waitForFirstSync();
+
+// Your data is now available - query it normally
+const todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', ['abc123']);
+
+// When leaving the screen or component...
+sub.unsubscribe();
+```
+
+
+
+```dart
+// Subscribe to a stream with parameters
+final sub = await db.syncStream('list_todos', {'list_id': 'abc123'}).subscribe();
+
+// Wait for initial data to sync
+await sub.waitForFirstSync();
+
+// Your data is now available - query it normally
+final todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', ['abc123']);
+
+// When leaving the screen or component...
+sub.unsubscribe();
+```
+
+
+
+```kotlin
+// Subscribe to a stream with parameters
+val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123")))
+ .subscribe()
+
+// Wait for initial data to sync
+sub.waitForFirstSync()
+
+// Your data is now available - query it normally
+val todos = database.getAll("SELECT * FROM todos WHERE list_id = ?", listOf("abc123"))
+
+// When leaving the screen or component...
+sub.unsubscribe()
+```
+
+
+
+```swift
+// Subscribe to a stream with parameters
+let sub = try await db.syncStream(name: "list_todos", params: ["list_id": JsonValue.string("abc123")]).subscribe()
+
+// Wait for initial data to sync
+try await sub.waitForFirstSync()
+
+// Your data is now available - query it normally
+let todos = try await db.getAll(sql: "SELECT * FROM todos WHERE list_id = ?", parameters: ["abc123"])
+
+// When leaving the screen or component...
+try await sub.unsubscribe()
+```
+
+
+
+```csharp
+// Subscribe to a stream with parameters
+var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" }).Subscribe();
+
+// Wait for initial data to sync
+await sub.WaitForFirstSync();
+
+// Your data is now available - query it normally
+var todos = await db.GetAll("SELECT * FROM todos WHERE list_id = ?", new[] { "abc123" });
+
+// When leaving the screen or component...
+sub.Unsubscribe();
+```
+
+
+
+## Framework Integrations
+
+Most developers use framework-specific hooks that handle subscription lifecycle automatically.
+
+
+
+ The `useSyncStream` hook automatically subscribes when the component mounts and unsubscribes when it unmounts:
+ ```jsx
+ function TodoList({ listId }) {
+   // Automatically subscribes/unsubscribes based on component lifecycle
+   const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: listId } });
+
+   // Hooks must run unconditionally, so query before any early return
+   const { data: todos } = useQuery('SELECT * FROM todos WHERE list_id = ?', [listId]);
+
+   // Check if data has synced
+   if (!stream?.subscription.hasSynced) {
+     return <Loading />; // placeholder loading component
+   }
+
+   // Data is ready - render it
+   return <TodoItems todos={todos} />; // placeholder list component
+ }
+ ```
+
+ You can also have `useQuery` wait for a stream before running:
+
+ ```jsx
+ // This query waits for the stream to sync before executing
+ const { data: todos } = useQuery(
+ 'SELECT * FROM todos WHERE list_id = ?',
+ [listId],
+ { streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }] }
+ );
+ ```
+
+
+ Both the `useQuery` and `useQueries` hooks automatically subscribe when the component mounts and unsubscribe when it unmounts:
+ ```jsx
+ function TodoList({ listId }) {
+   // Automatically subscribes/unsubscribes based on component lifecycle
+   const { data: todos, isLoading } = useQuery({
+     queryKey: ['test'],
+     query: 'SELECT 1',
+     streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }]
+   });
+
+   // Check if data has synced
+   if (isLoading) {
+     return <Loading />; // placeholder loading component
+   }
+
+   // Data is ready - render it
+   return <TodoItems todos={todos} />; // placeholder list component
+ }
+ ```
+
+ ```jsx
+ function TodoList({ listId }) {
+ // Automatically subscribes/unsubscribes based on component lifecycle
+ const { allData, anyPending } = useQueries({
+ queries: [
+ { queryKey: ['test1'], query: 'SELECT 1', streams: [{ name: 'a' }] },
+ { queryKey: ['test2'], query: 'SELECT 2' }
+ ],
+ combine: (results) => ({
+ allData: results.map((r) => r.data),
+ anyPending: results.some((r) => r.isPending)
+ })
+ });
+ ...
+ }
+ ```
+
+
+The `useSyncStream` composable automatically subscribes when the component mounts and unsubscribes when it unmounts:
+```vue
+<script setup>
+// Automatically subscribes/unsubscribes based on component lifecycle
+// (minimal sketch; mirrors the React hook's signature)
+const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: listId } });
+</script>
+```
+
+You can also have `useQuery` wait for a stream before running:
+
+```js
+// This query waits for the stream to sync before executing
+const { data: todos } = useQuery(
+ 'SELECT * FROM todos WHERE list_id = ?',
+ [listId],
+ { streams: [
+ { name: 'list_todos',
+ parameters: { list_id: listId },
+ waitForStream: true
+ }
+ ]
+ }
+);
+```
+
+
+
+
+## Checking Sync Status
+
+You can check whether a subscription has synced and monitor download progress:
+
+
+
+```js
+const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();
+
+// Check if this subscription has completed initial sync
+const status = db.currentStatus.forStream(sub);
+console.log(status?.subscription.hasSynced); // true/false
+console.log(status?.progress); // download progress
+```
+
+
+
+```dart
+final sub = await db.syncStream('list_todos', {'list_id': 'abc123'}).subscribe();
+
+// Check if this subscription has completed initial sync
+final status = db.currentStatus.forStream(sub);
+print(status?.subscription.hasSynced); // true/false
+print(status?.progress); // download progress
+```
+
+
+
+```kotlin
+val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123")))
+ .subscribe()
+
+// Check if this subscription has completed initial sync
+val status = database.currentStatus.forStream(sub)
+println(status?.subscription?.hasSynced) // true/false
+println(status?.progress) // download progress
+```
+
+
+
+```swift
+let sub = try await db.syncStream(name: "list_todos", params: ["list_id": JsonValue.string("abc123")]).subscribe()
+
+// Check if this subscription has completed initial sync
+let status = db.currentStatus.forStream(stream: sub)
+print(status?.subscription.hasSynced ?? false) // true/false
+print(status?.progress) // download progress
+```
+
+
+
+```csharp
+var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" }).Subscribe();
+
+// Check if this subscription has completed initial sync
+var status = db.CurrentStatus.ForStream(sub);
+Console.WriteLine(status?.Subscription.HasSynced); // true/false
+Console.WriteLine(status?.Progress); // download progress
+```
+
+
+
+## TTL (Time-To-Live)
+
+TTL controls how long data remains cached after you unsubscribe. This enables "warm cache" behavior — when users navigate back to a screen, data may already be available without waiting for a sync.
+
+**Default behavior:** Data is cached for 24 hours after unsubscribing. For most apps, this default works well.
+
+### Setting a Custom TTL
+
+
+
+```js
+// Cache for 1 hour after unsubscribe (TTL in seconds)
+const sub = await db.syncStream('todos', { list_id: 'abc' })
+ .subscribe({ ttl: 3600 });
+
+// Cache indefinitely (data never expires)
+const sub = await db.syncStream('todos', { list_id: 'abc' })
+ .subscribe({ ttl: Infinity });
+
+// No caching (remove data immediately on unsubscribe)
+const sub = await db.syncStream('todos', { list_id: 'abc' })
+ .subscribe({ ttl: 0 });
+```
+
+
+
+```dart
+// Cache for 1 hour after unsubscribe
+final sub = await db.syncStream('todos', {'list_id': 'abc'})
+ .subscribe(ttl: const Duration(hours: 1));
+
+// Cache for 7 days
+final sub = await db.syncStream('todos', {'list_id': 'abc'})
+ .subscribe(ttl: const Duration(days: 7));
+```
+
+
+
+```kotlin
+// Cache for 1 hour after unsubscribe
+val sub = database.syncStream("todos", mapOf("list_id" to JsonParam.String("abc")))
+ .subscribe(ttl = 1.hours)
+
+// Cache for 7 days
+val sub = database.syncStream("todos", mapOf("list_id" to JsonParam.String("abc")))
+ .subscribe(ttl = 7.days)
+```
+
+
+
+```swift
+// Cache for 1 hour after unsubscribe (TTL in seconds)
+let sub = try await db.syncStream(name: "todos", params: ["list_id": JsonValue.string("abc")])
+ .subscribe(ttl: 60 * 60, priority: nil)
+
+// Cache for 7 days
+let sub = try await db.syncStream(name: "todos", params: ["list_id": JsonValue.string("abc")])
+ .subscribe(ttl: 60 * 60 * 24 * 7, priority: nil)
+```
+
+
+
+```csharp
+// Cache for 1 hour after unsubscribe
+var sub = await db.SyncStream("todos", new() { ["list_id"] = "abc" })
+ .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromHours(1) });
+
+// Cache for 7 days
+var sub = await db.SyncStream("todos", new() { ["list_id"] = "abc" })
+ .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromDays(7) });
+```
+
+
+
+### How TTL Works
+
+- **Per-subscription**: Each `(stream name, parameters)` pair has its own TTL.
+- **First subscription wins**: If you subscribe to the same stream with the same parameters multiple times, the TTL from the first subscription is used.
+- **After unsubscribe**: Data continues syncing for the TTL duration, then is removed from the client-side SQLite database.
+
+```js
+// Example: User opens two lists with different TTLs
+const subA = await db.syncStream('todos', { list_id: 'A' }).subscribe({ ttl: 43200 }); // 12h
+const subB = await db.syncStream('todos', { list_id: 'B' }).subscribe({ ttl: 86400 }); // 24h
+
+// Each subscription is independent
+// List A data cached for 12h after unsubscribe
+// List B data cached for 24h after unsubscribe
+```
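+
+For intuition, the TTL expiry rule can be modeled as a pure function (a hypothetical sketch for illustration only, not part of any SDK):
+
+```js
+// Hypothetical helper: after unsubscribing at `unsubscribedAtMs`, data stays
+// cached until the TTL (in seconds) has elapsed; `ttl: Infinity` never expires.
+function isStillCached(unsubscribedAtMs, ttlSeconds, nowMs) {
+  if (ttlSeconds === Infinity) return true;
+  return nowMs < unsubscribedAtMs + ttlSeconds * 1000;
+}
+
+console.log(isStillCached(0, 3600, 3599000)); // true - still within the hour
+console.log(isStillCached(0, 3600, 3600000)); // false - TTL elapsed
+console.log(isStillCached(0, 0, 1)); // false - ttl 0 removes data immediately
+```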
+
+## Priority Override
+
+Streams can have a default priority set in the YAML sync configuration (see [Prioritized Sync](/sync/advanced/prioritized-sync)). When subscribing, you can override this priority for a specific subscription:
+```js
+// Override the stream's default priority
+const sub = await db.syncStream('todos', { list_id: 'abc' }).subscribe({ priority: 1 });
+```
+
+When different components subscribe to the same stream with the same parameters but different priorities, PowerSync uses the highest priority for syncing. That higher priority is kept until the subscription ends (or its TTL expires). Subscriptions with different parameters are independent and do not conflict.
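+
+The "highest priority wins" rule can be sketched as follows (a hypothetical illustration, not SDK code; in PowerSync's prioritized sync, lower numbers mean higher priority):
+
+```js
+// Hypothetical sketch: resolve the priority used for syncing when multiple
+// active subscriptions share the same stream and parameters. Lower numbers
+// mean higher priority, so the minimum value wins.
+function effectivePriority(subscriptionPriorities) {
+  return Math.min(...subscriptionPriorities);
+}
+
+console.log(effectivePriority([3, 1, 2])); // 1 - the most urgent subscription wins
+```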
+
+## Connection Parameters
+
+Connection parameters are a more advanced feature for values that apply to all streams in a session. They're the Sync Streams equivalent of [Client Parameters](/sync/rules/client-parameters) in legacy Sync Rules.
+
+
+For most use cases, **subscription parameters** (passed when subscribing) are more flexible and recommended. Use connection parameters only when you need a single global value across all streams, like an environment flag.
+
+
+Define streams that use connection parameters:
+
+```yaml
+streams:
+ config:
+ auto_subscribe: true
+ query: SELECT * FROM config WHERE env = connection.parameter('environment')
+```
+
+Set connection parameters when connecting:
+
+
+
+```js
+await db.connect(connector, {
+ params: { environment: 'production' }
+});
+```
+
+
+
+```dart
+await db.connect(
+ connector: connector,
+ params: {'environment': 'production'},
+);
+```
+
+
+
+```kotlin
+database.connect(
+ connector,
+ params = mapOf("environment" to JsonParam.String("production"))
+)
+```
+
+
+
+```swift
+try await db.connect(
+ connector: connector,
+ options: ConnectOptions(params: ["environment": JsonValue.string("production")])
+)
+```
+
+
+
+```csharp
+await db.Connect(connector, new ConnectOptions {
+ Params = new() { ["environment"] = "production" }
+});
+```
+
+
+
+## API Reference
+
+For quick reference, here are the key methods available in each SDK:
+
+| Method | Description |
+|--------|-------------|
+| `db.syncStream(name, params)` | Get a `SyncStream` instance for a stream with optional parameters |
+| `stream.subscribe(options)` | Subscribe to the stream. Returns a `SyncStreamSubscription` |
+| `subscription.waitForFirstSync()` | Wait until the subscription has completed its initial sync |
+| `subscription.unsubscribe()` | Unsubscribe from the stream (data [remains cached](/sync/streams/client-usage#how-ttl-works) for TTL duration) |
+| `db.currentStatus.forStream(sub)` | Get sync status and progress for a subscription |
diff --git a/sync/streams/ctes.mdx b/sync/streams/ctes.mdx
new file mode 100644
index 00000000..ad01a44c
--- /dev/null
+++ b/sync/streams/ctes.mdx
@@ -0,0 +1,195 @@
+---
+title: "Common Table Expressions (CTEs)"
+description: Reuse common query patterns within a stream using CTEs to simplify configurations and improve efficiency.
+---
+
+When a stream needs reusable filtering logic, you can define it once in a Common Table Expression (CTE) and reuse it in that stream's queries. This keeps stream definitions DRY and makes it easier to maintain. For the supported syntax of the `with` block and CTE rules, see [Supported SQL — CTE and WITH syntax](/sync/supported-sql#cte-and-with-syntax).
+
+## Why Use CTEs
+
+Consider an app where users belong to organizations. Several tables need to filter by the user's organizations:
+
+```yaml
+# Without CTEs - repetitive and hard to maintain
+streams:
+ org_projects:
+ query: |
+ SELECT * FROM projects
+ WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id())
+
+ org_repositories:
+ query: |
+ SELECT * FROM repositories
+ WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id())
+
+ org_settings:
+ query: |
+ SELECT * FROM settings
+ WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id())
+```
+
+The same subquery appears three times. You can merge these into one stream and define the logic once using a CTE:
+
+```yaml
+# With a CTE and multiple queries
+streams:
+ org_data:
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ queries:
+ - SELECT * FROM projects WHERE org_id IN user_orgs
+ - SELECT * FROM repositories WHERE org_id IN user_orgs
+ - SELECT * FROM settings WHERE org_id IN user_orgs
+```
+
+If the membership logic changes, you update it in one place.
+
+## Defining CTEs
+
+Define CTEs in a `with` block inside a stream. Each CTE has a name and a `SELECT` query:
+
+```yaml
+streams:
+ my_stream:
+ with:
+ cte_name: SELECT columns FROM table WHERE conditions
+ query: SELECT * FROM some_table WHERE col IN cte_name
+```
+
+The CTE query can include any filtering logic, including parameters:
+
+```yaml
+streams:
+ user_data:
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ active_projects: SELECT id FROM projects WHERE archived = false
+ query: SELECT * FROM projects WHERE org_id IN user_orgs AND id IN active_projects
+```
+
+## Using CTEs in Queries
+
+Once defined in a stream's `with` block, use the CTE name in that stream's query or queries. You can use it like a subquery or join it as if it were a table.
+
+**Short-hand syntax** (when the CTE has exactly one column):
+
+```yaml
+streams:
+ projects:
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ query: SELECT * FROM projects WHERE org_id IN user_orgs
+```
+
+The short-hand `IN cte_name` is equivalent to `IN (SELECT * FROM cte_name)`. If the CTE has more than one column, this form is an error; use explicit subquery or join syntax instead.
+
+**Explicit subquery syntax** (when you need to select specific columns):
+
+```yaml
+streams:
+ projects:
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ query: SELECT * FROM projects WHERE org_id IN (SELECT org_id FROM user_orgs)
+```
+
+**Join syntax** (you can join a CTE as if it were a table):
+
+```yaml
+streams:
+ projects:
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ query: SELECT projects.* FROM projects, user_orgs WHERE user_orgs.org_id = projects.org_id
+```
+
+## Combining with Multiple Queries
+
+CTEs work well with the `queries` feature (multiple queries per stream). This lets you share the CTE and keep all query results in one stream: the client only needs to manage one subscription instead of multiple.
+
+```yaml
+streams:
+ user_data:
+ with:
+ my_org: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ queries:
+ - SELECT * FROM projects WHERE org_id IN my_org
+ - SELECT * FROM repositories WHERE org_id IN my_org
+ - SELECT * FROM team_members WHERE org_id IN my_org
+```
+
+## Complete Example
+
+A full configuration using CTEs. Each stream that needs shared logic defines its own `with` block:
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ # Organization-level data (auto-sync) - one stream with CTE and multiple queries
+ org_and_projects:
+ auto_subscribe: true
+ with:
+ user_orgs: |
+ SELECT org_id FROM org_memberships WHERE user_id = auth.user_id()
+ accessible_projects: |
+ SELECT id FROM projects
+ WHERE org_id IN user_orgs
+ OR id IN (SELECT project_id FROM project_shares WHERE shared_with = auth.user_id())
+ queries:
+ - SELECT * FROM organizations WHERE id IN user_orgs
+ - SELECT * FROM projects WHERE id IN accessible_projects
+
+ # Project details (on-demand) - same CTE and param, so one stream with multiple queries
+ project_details:
+ with:
+ accessible_projects: |
+ SELECT id FROM projects
+ WHERE org_id IN (SELECT org_id FROM org_memberships WHERE user_id = auth.user_id())
+ OR id IN (SELECT project_id FROM project_shares WHERE shared_with = auth.user_id())
+ queries:
+ - |
+ SELECT * FROM tasks
+ WHERE project_id = subscription.parameter('project_id')
+ AND project_id IN accessible_projects
+ - |
+ SELECT * FROM files
+ WHERE project_id = subscription.parameter('project_id')
+ AND project_id IN accessible_projects
+```
+
+## Limitations
+
+The following rules apply to CTEs. For the full syntax reference, see [Supported SQL — CTE and WITH syntax](/sync/supported-sql#cte-and-with-syntax).
+
+**Sync Streams do not support global CTEs.** Use a `with` block only inside a stream. To reuse logic across streams, define the same CTE (or equivalent subquery) in each stream that needs it, or combine streams using [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) so one stream can share a single CTE across queries.
+
+**CTEs cannot reference other CTEs.** Each CTE must be self-contained:
+
+```yaml
+# This won't work - cte2 cannot reference cte1
+streams:
+ my_stream:
+ with:
+ cte1: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ cte2: SELECT id FROM projects WHERE org_id IN cte1 # Error!
+```
+
+If you need to chain filters, use nested subqueries in your stream query instead:
+
+```yaml
+streams:
+ tasks:
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ query: |
+ SELECT * FROM tasks
+ WHERE project_id IN (
+ SELECT id FROM projects WHERE org_id IN user_orgs
+ )
+```
+
+**The short-hand `IN cte_name` works only when the CTE has exactly one column.** If the CTE has multiple columns, use explicit subquery syntax or join the CTE as a table.
+
+**CTE names take precedence over table/collection names.** If you define a CTE with the same name as a database table/collection, the CTE will be used. Choose distinct names to avoid confusion.
diff --git a/sync/streams/examples.mdx b/sync/streams/examples.mdx
new file mode 100644
index 00000000..2ea52e2b
--- /dev/null
+++ b/sync/streams/examples.mdx
@@ -0,0 +1,463 @@
+---
+title: "Examples, Patterns & Demos"
+description: Common patterns, use case examples, and working demo apps for Sync Streams.
+sidebarTitle: "Examples & Demos"
+---
+
+## Common Patterns
+
+These patterns show how to combine Sync Streams features to solve common real-world scenarios.
+
+### Organization-Scoped Data
+
+For apps where users belong to an organization (or company, team, workspace, etc.), use JWT claims to scope data. The `org_id` in the JWT ensures users only see data from their organization, without needing to pass it from the client.
+
+```yaml
+streams:
+ # All projects in the user's organization (auto-sync on connect)
+ org_projects:
+ auto_subscribe: true
+ query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id')
+
+ # Tasks for a specific project (sync on-demand)
+ project_tasks:
+ query: |
+ SELECT * FROM tasks
+ WHERE project_id = subscription.parameter('project_id')
+ AND project_id IN (SELECT id FROM projects WHERE org_id = auth.parameter('org_id'))
+```
+
+Your backend should include the `org_id` in the JWT payload when issuing tokens — e.g. `{ "sub": "user-123", "org_id": "org-456" }`. Clients auto-subscribe to `org_projects` when they connect, so the project list is available offline immediately. Subscribe to `project_tasks` when the user opens a project:
+
+```js
+// When the user opens a project view
+const sub = await db.syncStream('project_tasks', { project_id: projectId }).subscribe();
+await sub.waitForFirstSync();
+
+// Unsubscribe when the user navigates away
+sub.unsubscribe();
+```
+
+For more complex organization structures where users can belong to multiple organizations, see [Expanding JSON Arrays](/sync/streams/parameters#expanding-json-arrays).
+
+### Role-Based Access
+
+When different users should see different data based on their role, use JWT claims to apply visibility rules. This keeps authorization logic on the server side where it's secure.
+
+```yaml
+streams:
+ # Admins see all articles, others see only published or their own
+ articles:
+ auto_subscribe: true
+ query: |
+ SELECT * FROM articles
+ WHERE org_id = auth.parameter('org_id')
+ AND (
+ status = 'published'
+ OR author_id = auth.user_id()
+ OR auth.parameter('role') = 'admin'
+ )
+```
+
+Your backend should include both `org_id` and `role` in the JWT — e.g. `{ "sub": "user-123", "org_id": "org-456", "role": "admin" }`. The `role` claim is set server-side so users can't escalate their own privileges. In this example, clients auto-subscribe to `articles` when they connect — no client-side subscription call needed.
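+
+On the backend, the claims for such a token might be assembled like this (an illustrative sketch; only the claim names `sub`, `org_id` and `role` come from the example above, the helper itself is hypothetical):
+
+```js
+// Hypothetical backend helper: build the JWT claims that auth.user_id() and
+// auth.parameter() read in the stream queries above. Signing is omitted.
+function buildClaims(userId, orgId, role) {
+  return { sub: userId, org_id: orgId, role: role };
+}
+
+console.log(JSON.stringify(buildClaims('user-123', 'org-456', 'admin')));
+// {"sub":"user-123","org_id":"org-456","role":"admin"}
+```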
+
+### Shared Resources
+
+For apps where users can share items with each other (like documents or folders), combine ownership checks with a "shares table" lookup. This syncs both items the user owns and items others have shared with them.
+
+```yaml
+streams:
+ my_documents:
+ auto_subscribe: true
+ query: |
+ SELECT * FROM documents
+ WHERE owner_id = auth.user_id()
+ OR id IN (SELECT document_id FROM document_shares WHERE shared_with = auth.user_id())
+```
+
+Clients auto-subscribe to `my_documents` when they connect, so the user's documents (owned and shared) are available immediately.
+
+### Syncing Related Data
+
+When a detail view needs data from multiple tables (like an issue and its comments), use a [CTE](/sync/streams/ctes) and [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to define the authorization check once and sync both tables in one subscription.
+
+```yaml
+streams:
+ issue_with_comments:
+ with:
+ my_projects: SELECT project_id FROM project_members WHERE user_id = auth.user_id()
+ queries:
+ - |
+ SELECT * FROM issues
+ WHERE id = subscription.parameter('issue_id')
+ AND project_id IN my_projects
+ - |
+ SELECT comments.* FROM comments
+ INNER JOIN issues ON comments.issue_id = issues.id
+ WHERE comments.issue_id = subscription.parameter('issue_id')
+ AND issues.project_id IN my_projects
+```
+
+Subscribe once when the user opens an issue:
+
+```js
+// When the user opens an issue view
+const issueSub = await db.syncStream('issue_with_comments', { issue_id: issueId }).subscribe();
+
+await issueSub.waitForFirstSync();
+
+// Unsubscribe when the user navigates away
+issueSub.unsubscribe();
+```
+
+
+If multiple streams share the same filtering logic, consider using [CTEs](/sync/streams/ctes) to avoid repetition and [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) so the client only needs to manage one subscription instead of multiple. This is more efficient and results in fewer sync buckets.
+
+
+### User's Default or Primary Item
+
+When users have a "default" or "primary" item stored in their profile, you can sync related data automatically without the client needing to know the ID upfront.
+
+```yaml
+streams:
+ # Sync todos from the user's primary list
+ primary_list_todos:
+ auto_subscribe: true
+ query: |
+ SELECT * FROM todos
+ WHERE list_id IN (
+ SELECT primary_list_id FROM users WHERE id = auth.user_id()
+ )
+```
+
+The subquery looks up the user's `primary_list_id` from the `users` table, then syncs all `todos` from that list. When the user changes their primary list in the database, the synced data updates automatically. Clients auto-subscribe to `primary_list_todos` when they connect — no client-side subscription call needed.
+
+### Hierarchical Data
+
+When your data has parent-child relationships across multiple levels, you can traverse the hierarchy using nested subqueries or joins. This is common in apps where access to child records is determined by membership at a higher level.
+
+For example, consider an app with organizations, projects, and tasks. Users belong to organizations, and should see all tasks in projects that belong to their organizations:
+
+```
+Organization → Projects → Tasks
+ ↑
+User membership
+```
+
+**Using nested subqueries:**
+
+```yaml
+streams:
+ org_tasks:
+ auto_subscribe: true
+ query: |
+ SELECT * FROM tasks
+ WHERE project_id IN (
+ SELECT id FROM projects WHERE org_id IN (
+ SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ )
+ )
+```
+
+The query reads from inside out: find the user's organizations, then find projects in those organizations, then find tasks in those projects.
+
+**Using joins** (often easier to read for deeply nested hierarchies):
+
+```yaml
+streams:
+ org_tasks:
+ auto_subscribe: true
+ query: |
+ SELECT tasks.* FROM tasks
+ INNER JOIN projects ON tasks.project_id = projects.id
+ INNER JOIN org_members ON projects.org_id = org_members.org_id
+ WHERE org_members.user_id = auth.user_id()
+```
+
+Both queries produce the same result. PowerSync handles these nested relationships efficiently, so you don't need to denormalize your database or add redundant foreign keys. Clients auto-subscribe to `org_tasks` when they connect — no client-side subscription call needed.
+
+### Many-to-Many Relationships
+
+Many-to-many relationships (like users subscribing to boards) typically use a join table. Sync Streams support `INNER JOIN`s, so you can traverse these relationships directly without denormalizing your schema.
+
+Consider a social app where users subscribe to message boards:
+
+```
+Users ←→ board_subscriptions ←→ Boards → Posts → Comments
+```
+
+```yaml
+streams:
+ # Posts from boards the user subscribes to
+ board_posts:
+ auto_subscribe: true
+ query: |
+ SELECT posts.* FROM posts
+ INNER JOIN board_subscriptions ON posts.board_id = board_subscriptions.board_id
+ WHERE board_subscriptions.user_id = auth.user_id()
+
+ # Comments on those posts (no denormalization needed)
+ board_comments:
+ auto_subscribe: true
+ query: |
+ SELECT comments.* FROM comments
+ INNER JOIN posts ON comments.post_id = posts.id
+ INNER JOIN board_subscriptions ON posts.board_id = board_subscriptions.board_id
+ WHERE board_subscriptions.user_id = auth.user_id()
+
+ # User profiles for co-subscribers (people who share a board with me)
+ board_users:
+ auto_subscribe: true
+ query: |
+ SELECT users.* FROM users
+ INNER JOIN board_subscriptions ON users.id = board_subscriptions.user_id
+ WHERE board_subscriptions.board_id IN (
+ SELECT board_id FROM board_subscriptions WHERE user_id = auth.user_id()
+ )
+```
+
+Clients auto-subscribe to all three streams when they connect. Each query joins through `board_subscriptions` to find relevant data: posts in the user's boards, comments on those posts, and other users sharing those boards.
+
+Unlike with legacy [Sync Rules](/sync/rules/many-to-many-join-tables), you don't need to denormalize your schema or maintain array columns to handle these relationships.
+
+## Use Case Examples
+
+Complete configurations for common application types.
+
+### To-do List App
+
+Sync the list of `lists` upfront, but only sync `todos` when the user opens a specific list:
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ # Always available - user can see their lists offline
+ lists:
+ auto_subscribe: true
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+
+ # Loaded on demand - only sync todos for the list being viewed
+ list_todos:
+ query: |
+ SELECT * FROM todos
+ WHERE list_id = subscription.parameter('list_id')
+ AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
+```
+
+Clients auto-subscribe to `lists` when they connect. Subscribe to `list_todos` when the user opens a list:
+
+```js
+// Lists are already synced (auto_subscribe: true)
+const lists = await db.getAll('SELECT * FROM lists');
+
+// When user opens a list
+const sub = await db.syncStream('list_todos', { list_id: selectedListId }).subscribe();
+await sub.waitForFirstSync();
+
+// Todos are now available locally
+const todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', [selectedListId]);
+
+// Unsubscribe when user navigates back to the list overview
+sub.unsubscribe();
+```
+
+### Chat Application
+
+Chat apps typically have many conversations but users only view one at a time. Sync the conversation list upfront so users can see all their chats immediately, but load messages on-demand to avoid syncing potentially thousands of messages across all conversations.
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ # User's conversations - always show the conversation list
+ my_conversations:
+ auto_subscribe: true
+ query: |
+ SELECT * FROM conversations
+ WHERE id IN (SELECT conversation_id FROM participants WHERE user_id = auth.user_id())
+
+ # Messages - only load for the active conversation
+ conversation_messages:
+ query: |
+ SELECT * FROM messages
+ WHERE conversation_id = subscription.parameter('conversation_id')
+ AND conversation_id IN (
+ SELECT conversation_id FROM participants WHERE user_id = auth.user_id()
+ )
+```
+
+Clients auto-subscribe to `my_conversations` when they connect. Subscribe to `conversation_messages` when the user opens a conversation:
+
+```js
+// Conversations are already synced (auto_subscribe: true)
+const conversations = await db.getAll('SELECT * FROM conversations');
+
+// When user opens a conversation
+const sub = await db.syncStream('conversation_messages', {
+ conversation_id: conversationId
+}).subscribe();
+await sub.waitForFirstSync();
+
+// Unsubscribe when user closes the conversation
+sub.unsubscribe();
+```
+
+### Project Management App
+
+This example shows a multi-tenant project management app where users can access public projects or projects they're members of. Each stream that needs the set of accessible projects defines its own CTE, since Sync Streams do not support a top-level `with` block.
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ # Organization data - always available
+ org_info:
+ auto_subscribe: true
+ query: SELECT * FROM organizations WHERE id = auth.parameter('org_id')
+
+ # All accessible projects - always available for navigation
+ projects:
+ auto_subscribe: true
+ with:
+ user_projects: |
+ SELECT id FROM projects
+ WHERE org_id = auth.parameter('org_id')
+ AND (is_public OR id IN (
+ SELECT project_id FROM project_members WHERE user_id = auth.user_id()
+ ))
+ query: SELECT * FROM projects WHERE id IN user_projects
+
+ # Project details - on demand when user opens a project (one CTE, multiple queries)
+ project_details:
+ with:
+ user_projects: |
+ SELECT id FROM projects
+ WHERE org_id = auth.parameter('org_id')
+ AND (is_public OR id IN (
+ SELECT project_id FROM project_members WHERE user_id = auth.user_id()
+ ))
+ queries:
+ - |
+ SELECT * FROM tasks
+ WHERE project_id = subscription.parameter('project_id')
+ AND project_id IN user_projects
+ - |
+ SELECT * FROM files
+ WHERE project_id = subscription.parameter('project_id')
+ AND project_id IN user_projects
+```
+
+Your backend should include `org_id` in the JWT — e.g. `{ "sub": "user-123", "org_id": "org-456" }`. Clients auto-subscribe to `org_info` and `projects` when they connect. Subscribe to project details when the user opens a project:
+
+```js
+// org_info and projects are already synced (auto_subscribe: true)
+const projects = await db.getAll('SELECT * FROM projects');
+
+// When user opens a project
+const sub = await db.syncStream('project_details', { project_id: projectId }).subscribe();
+await sub.waitForFirstSync();
+
+// Unsubscribe when user navigates away
+sub.unsubscribe();
+```
+
+### Organization Workspace (Using Multiple Queries)
+
+When several tables share the same access pattern, you can group them into a single stream using multiple queries and a CTE. This makes syncing more efficient, and the client only needs to manage one subscription instead of several.
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ # All org-level data syncs together in one stream
+ org_data:
+ auto_subscribe: true
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ queries:
+ - SELECT * FROM organizations WHERE id IN user_orgs
+ - SELECT * FROM projects WHERE org_id IN user_orgs
+ - SELECT * FROM team_members WHERE org_id IN user_orgs
+
+ # Project details - on demand. CTE includes subscription.parameter so queries stay simple.
+ project_details:
+ with:
+ selected_project: |
+ SELECT projects.id FROM projects
+ INNER JOIN org_members ON org_members.org_id = projects.org_id AND org_members.user_id = auth.user_id()
+ WHERE projects.id = subscription.parameter('project_id')
+ queries:
+ - SELECT * FROM tasks WHERE project_id IN selected_project
+ - SELECT * FROM files WHERE project_id IN selected_project
+ - SELECT * FROM comments WHERE project_id IN selected_project
+```
+
+The `user_orgs` CTE in `org_data` looks up org membership using `auth.user_id()`. In `project_details`, the CTE can include `subscription.parameter('project_id')` so it both authorizes (user must be in the project's org) and applies the selected project — the queries then just filter by `project_id IN selected_project`. Clients auto-subscribe to `org_data` when they connect. Subscribe to `project_details` when the user opens a project:
+
+```js
+// org_data is already synced (auto_subscribe: true)
+const projects = await db.getAll('SELECT * FROM projects');
+
+// When user opens a project
+const sub = await db.syncStream('project_details', { project_id: projectId }).subscribe();
+await sub.waitForFirstSync();
+
+// Unsubscribe when user navigates away
+sub.unsubscribe();
+```
+
+The `project_details` stream uses a [CTE](/sync/streams/ctes) and groups tasks, files, and comments for a specific project into a single subscription.
+
+## Demo Apps
+
+The following demo apps show Sync Streams in action. They combine auto-subscribe streams (for data that should always be available) with on-demand streams (for data loaded when needed).
+
+
+
+Try the [`react-supabase-todolist-sync-streams`](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-sync-streams) demo app by following the instructions in the README.
+
+In this demo:
+- The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior).
+- The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters).
+- When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior).
+
+
+Try the [`supabase-todolist`](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app, which supports Sync Streams.
+
+Deploy the following Sync Streams:
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ lists:
+ auto_subscribe: true
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+ todos:
+ query: |
+ SELECT * FROM todos
+ WHERE list_id = subscription.parameter('list')
+ AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
+```
+
+In this demo:
+- The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior).
+- The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters).
+- When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior).
+
+
+Sync Streams support is available. Demo app coming soon.
+
+
+Sync Streams support is available. Demo app coming soon.
+
+
+Sync Streams support is available. Demo app coming soon.
+
+
diff --git a/sync/streams/migration.mdx b/sync/streams/migration.mdx
new file mode 100644
index 00000000..036c79d6
--- /dev/null
+++ b/sync/streams/migration.mdx
@@ -0,0 +1,255 @@
+---
+title: "Migrating from Sync Rules"
+description: How to migrate existing projects from legacy Sync Rules to Sync Streams.
+---
+
+import StreamDefinitionReference from '/snippets/stream-definition-reference.mdx';
+
+## Why Migrate?
+
+PowerSync's original Sync Rules system was optimized for offline-first use cases where you want to "sync everything upfront" when the client connects, so data is available locally if the user goes offline.
+
+However, many developers are building apps where users are mostly online, and you don't want to make users wait to sync a lot of data upfront. This is especially true for **web apps**: users are mostly online, you often want to sync only the data needed for the current page, and users frequently have multiple browser tabs open — each needing different subsets of data.
+
+### The Problem with Client Parameters
+
+[Client Parameters](/sync/rules/client-parameters) in Sync Rules partially support on-demand syncing — for example, using a `project_ids` array to sync only specific projects. However, manually managing these arrays across different browser tabs becomes painful:
+
+- You need to aggregate IDs across all open tabs
+- You need additional logic for different data types (tables)
+- If you want to keep data around after a tab closes (caching), you need even more management
+
+### How Sync Streams Solve This
+
+Sync Streams address these limitations:
+
+1. **On-demand syncing**: Define streams once, then subscribe from your app one or more times with different parameters. No need to manage arrays of IDs — each subscription is independent.
+
+2. **Multi-tab support**: Each subscription manages its own lifecycle. Open the same list in two tabs? Each tab subscribes independently. Close one? The other keeps working.
+
+3. **Built-in caching**: Each subscription has a configurable `ttl` that keeps data cached after unsubscribing. When users return to a screen, data may already be available — no loading state needed.
+
+4. **Simpler, more powerful syntax**: Queries support subqueries, JOINs, and CTEs, with no separate [parameter queries](/sync/rules/overview#parameter-queries). The syntax is closer to plain SQL and supports more SQL features than Sync Rules.
+
+5. **Framework integration**: [React hooks and Kotlin Compose](/sync/streams/client-usage#framework-integrations) extensions let your UI components automatically manage subscriptions based on what's rendered.
+
+### Still Need Offline-First?
+
+If you want "sync everything upfront" behavior (like Sync Rules), set [`auto_subscribe: true`](/sync/streams/overview#using-auto-subscribe) on your Sync Streams and clients will subscribe automatically when they connect.
+
+## Requirements
+
+- PowerSync Service v1.20.0+ (Cloud instances already meet this)
+- Latest SDK versions with [Rust-based sync client](https://releases.powersync.com/announcements/improved-sync-performance-in-our-client-sdks) (enabled by default on latest SDKs)
+- `config: edition: 3` in your sync config
+
+
+
+| SDK | Minimum Version | Rust Client Default |
+|-----|-----------------|---------------------|
+| JS Web | v1.27.0 | v1.32.0 |
+| React Native | v1.25.0 | v1.29.0 |
+| React hooks | v1.8.0 | — |
+| Node.js | v0.11.0 | v0.16.0 |
+| Capacitor | v0.0.1 | v0.3.0 |
+| Dart/Flutter | v1.16.0 | v1.17.0 |
+| Kotlin | v1.7.0 | v1.9.0 |
+| Swift | v1.11.0 | v1.8.0 |
+| .NET | v0.0.8-alpha.1 | v0.0.5-alpha.1 |
+
+
+
+If you're on an SDK version below the "Rust Client Default" version, enable the Rust client manually:
+
+**JavaScript:**
+```js
+await db.connect(new MyConnector(), {
+ clientImplementation: SyncClientImplementation.RUST
+});
+```
+
+**Dart:**
+```dart
+database.connect(
+ connector: YourConnector(),
+ options: const SyncOptions(
+ syncImplementation: SyncClientImplementation.rust,
+ ),
+);
+```
+
+**Kotlin:**
+```kotlin
+database.connect(MyConnector(), options = SyncOptions(
+ newClientImplementation = true,
+))
+```
+
+**Swift:**
+```swift
+@_spi(PowerSyncExperimental) import PowerSync
+
+try await db.connect(connector: connector, options: ConnectOptions(
+ newClientImplementation: true,
+))
+```
+
+
+
+## Migration Tool
+
+You can generate a Sync Streams draft from your existing Sync Rules in two ways:
+
+1. **Dashboard:** In the [PowerSync Dashboard](https://dashboard.powersync.com/), use the **Migrate to Sync Streams** button. It converts your Sync Rules into a Sync Streams draft that you can review before deploying.
+
+2. **CLI:** Run `powersync migrate sync-rules` to produce a Sync Streams draft from your current sync config.
+
+
+A standalone migration tool is also available [here](https://powersync-community.github.io/bucket-definitions-to-sync-streams/).
+
+
+The output uses `auto_subscribe: true` by default, preserving your existing sync-everything-upfront behavior so no client-side changes are required when you first deploy.
+
+**Next steps:** Review the draft, then deploy it (via the Dashboard or `powersync deploy sync-config`). After that, you can optionally migrate individual streams to on-demand subscriptions over time — remove `auto_subscribe: true` from specific streams and update client code to use the `syncStream()` API where it makes sense for your app.
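+
+For example, an incrementally migrated config might keep one stream auto-subscribed while converting another to on-demand (a sketch; the `attachments` table is hypothetical):
+
+```yaml
+streams:
+  lists:
+    auto_subscribe: true  # unchanged: still syncs upfront on connect
+    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+  attachments:
+    # auto_subscribe removed: clients now subscribe on demand
+    query: |
+      SELECT * FROM attachments
+      WHERE list_id = subscription.parameter('list_id')
+        AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
+```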
+
+## Stream Definition Reference
+
+
+
+## Migration Examples
+
+### Global Data (No Parameters)
+
+In Sync Rules, a ["global" bucket](/sync/rules/global-buckets) syncs the same data to all users. In Sync Streams, you achieve this with queries that have no parameters. Add [`auto_subscribe: true`](/sync/streams/overview#using-auto-subscribe) to maintain the Sync Rules behavior where data syncs automatically on connect.
+
+**Sync Rules:**
+```yaml
+bucket_definitions:
+ global:
+ data:
+ - SELECT * FROM todos
+ - SELECT * FROM lists WHERE archived = false
+```
+
+**Sync Streams:**
+```yaml
+config:
+ edition: 3
+
+streams:
+ shared_data:
+ auto_subscribe: true # Sync automatically like Sync Rules
+ queries:
+ - SELECT * FROM todos
+ - SELECT * FROM lists WHERE archived = false
+```
+
+
+Without `auto_subscribe: true`, clients would need to explicitly subscribe to these streams. This gives you flexibility to migrate incrementally or switch to on-demand syncing later.
+
+
+### User-Scoped Data
+
+**Sync Rules:**
+```yaml
+bucket_definitions:
+ user_lists:
+ priority: 1
+ parameters: SELECT request.user_id() as user_id
+ data:
+ - SELECT * FROM lists WHERE owner_id = bucket.user_id
+```
+
+**Sync Streams:**
+```yaml
+config:
+ edition: 3
+
+streams:
+ user_lists:
+ auto_subscribe: true
+ priority: 1
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+```
+
+### Data with Subqueries (Replaces Parameter Queries)
+
+**Sync Rules:**
+```yaml
+bucket_definitions:
+ owned_lists:
+ parameters: |
+ SELECT id as list_id FROM lists WHERE owner_id = request.user_id()
+ data:
+ - SELECT * FROM lists WHERE lists.id = bucket.list_id
+ - SELECT * FROM todos WHERE todos.list_id = bucket.list_id
+```
+
+**Sync Streams:**
+```yaml
+config:
+ edition: 3
+
+streams:
+ owned_lists:
+ auto_subscribe: true
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+ list_todos:
+ query: |
+ SELECT * FROM todos
+ WHERE list_id = subscription.parameter('list_id')
+ AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
+```
+
+### Client Parameters → Subscription Parameters
+
+**Sync Rules** used global [Client Parameters](/sync/rules/client-parameters):
+```yaml
+bucket_definitions:
+ posts:
+ parameters: SELECT (request.parameters() ->> 'current_page') as page_number
+ data:
+ - SELECT * FROM posts WHERE page_number = bucket.page_number
+```
+
+**Sync Streams** use Subscription Parameters, which are more flexible — you can subscribe multiple times with different values:
+```yaml
+config:
+ edition: 3
+
+streams:
+ posts:
+ query: SELECT * FROM posts WHERE page_number = subscription.parameter('page_number')
+```
+
+```js
+// Subscribe to multiple pages simultaneously
+const page1 = await db.syncStream('posts', { page_number: 1 }).subscribe();
+const page2 = await db.syncStream('posts', { page_number: 2 }).subscribe();
+```
+
+## Parameter Syntax Changes
+
+| Sync Rules | Sync Streams |
+|------------|--------------|
+| `request.user_id()` | `auth.user_id()` |
+| `request.jwt() ->> 'claim'` | `auth.parameter('claim')` |
+| `request.parameters() ->> 'key'` | `subscription.parameter('key')` ([subscription parameter](/sync/streams/parameters#subscription-parameters)) or `connection.parameter('key')` ([connection parameter](/sync/streams/parameters#connection-parameters)) |
+| `bucket.param_name` | Use the parameter directly in the query, e.g. `subscription.parameter('key')` |
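+
+Putting the new accessors together in one stream (a sketch; the `documents` table and its columns are hypothetical):
+
+```yaml
+streams:
+  org_documents:
+    query: |
+      SELECT * FROM documents
+      WHERE org_id = auth.parameter('org_id')                -- was: request.jwt() ->> 'org_id'
+        AND folder_id = subscription.parameter('folder_id')  -- was: request.parameters() ->> 'folder_id'
+        AND owner_id = auth.user_id()                        -- was: request.user_id()
+```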
+
+## Client-Side Changes
+
+After updating your sync config, update your client code to use subscriptions:
+
+```js
+// Before (Sync Rules with Client Parameters)
+await db.connect(connector, {
+ params: { current_project: projectId }
+});
+
+// After (Sync Streams with Subscriptions)
+await db.connect(connector);
+const sub = await db.syncStream('project_data', { project_id: projectId }).subscribe();
+```
+
+See [Client-Side Usage](/sync/streams/client-usage) for detailed examples.
diff --git a/sync/streams/overview.mdx b/sync/streams/overview.mdx
index 09fdd60c..7a7e3729 100644
--- a/sync/streams/overview.mdx
+++ b/sync/streams/overview.mdx
@@ -1,518 +1,289 @@
---
-title: "Sync Streams (Early Alpha)"
-description: Sync Streams will replace Sync Rules and are designed to allow for more dynamic syncing, while not compromising on existing offline-first capabilities.
-sidebarTitle: "Overview"
+title: "Sync Streams"
+description: Sync Streams enable partial syncing, letting you define exactly which data from your backend can sync to each client using simple SQL-like queries.
+sidebarTitle: "Quickstart"
---
-## Motivation
+import StreamDefinitionReference from '/snippets/stream-definition-reference.mdx';
-PowerSync's original [Sync Rules](/sync/rules/overview) system was optimized for offline-first use cases where you want to “sync everything upfront” when the client connects, so that data is available locally if a user goes offline at any point.
+Instead of syncing entire tables, you tell PowerSync exactly which data each user/client can sync. You write simple SQL-like queries to define streams of data, and your client app subscribes to the streams it needs. PowerSync handles the rest, keeping data in sync in real-time and making it available offline.
-However, many developers are building apps where users are mostly online, and you don't want to make users wait to sync a lot of data upfront. In these cases, it's more suited to sync data on-demand. This is especially true for web apps: users are mostly online and you often want to sync only the data needed for the current page. Users also frequently have multiple tabs open, each needing different subsets of data.
+For example, you might create a stream that syncs only the current user's to-do items, another for shared projects they have access to, and another for reference data that everyone needs. Your app subscribes to these streams on demand, and only that data syncs to the client-side SQLite database.
-Sync engines like PowerSync are still great for these online web app use cases, because they provide you with real-time updates, simplified state management, and ease of working with data locally.
-
-[Client Parameters](/sync/rules/client-parameters) in the current Sync Rules system support on-demand syncing across different browser tabs to some extent: For example, using a `project_ids` array as a Client Parameter to sync only specific projects. However, manually managing these arrays across different browser tabs becomes quite painful.
-
-We are introducing **Sync Streams** to provide the best of both worlds: support for dynamic on-demand syncing, as well as "syncing everything upfront".
-
-Key improvements in Sync Streams over Sync Rules include:
-
-1. **On-demand syncing**: You define Sync Streams on the PowerSync Service, and a client can then subscribe to them one or more times with different parameters.
-2. **Temporary caching-like behavior**: Each subscription includes a configurable `ttl` that keeps data active after your app unsubscribes, acting as a warm cache for recently accessed data.
-3. **Simpler developer experience**: Simplified syntax and mental model, and capabilities such as your UI components automatically managing subscriptions (for example, React hooks).
+
+**Beta Release**
-If you want “sync everything upfront” behavior (like the current Sync Rules system), that’s easy too: you can configure any of your Sync Streams to be auto-subscribed by the client on connecting.
+Sync Streams are now in beta and production-ready. We recommend Sync Streams for all new projects, and encourage existing projects to [migrate from Sync Rules](/sync/streams/migration).
+We welcome your feedback — please share with us in [Discord](https://discord.gg/powersync).
+
-
-**Early Alpha Release**
+## Defining Streams
-Sync Streams will ultimately replace the current Sync Rules system. They are currently in an early alpha release, which of course means they're not yet suitable for production use, and the APIs and DX likely still need refinement.
+Streams are defined in a YAML configuration file. Each stream has a **name** and a **query** that specifies which rows to sync using SQL-like syntax. The query can reference [parameters](/sync/overview#how-it-works) like the authenticated user's ID to personalize what each user receives.
-They are open for anyone to test: we are actively seeking your feedback on their performance for your use cases, the developer experience, missing capabilities, and potential optimizations. Please share your feedback with us in Discord 🫡
+
+
+In the [PowerSync Dashboard](https://dashboard.powersync.com/):
-Sync Streams will be supported alongside Sync Rules for the foreseeable future, although we recommend migrating to Sync Streams once in Beta.
-
+1. Select your project and instance
+2. Go to **Sync Streams**
+3. Edit the YAML directly in the dashboard
+4. Click **Deploy** to validate and deploy
-## Requirements for Using Sync Streams
+```yaml
+config:
+ edition: 3
-* v1.15.0 of the PowerSync Service (Cloud instances are already on this version)
-* Minimum SDK versions:
- * JS:
- * Web: v1.27.0
- * React Native: v1.25.0
- * React hooks: v1.8.0
- * Dart: v1.16.0
- * Kotlin: v1.7.0
- * .NET: v0.0.8-alpha.1
- * Swift: v1.11.0
-* Use of the [Rust-based sync client](https://releases.powersync.com/announcements/improved-sync-performance-in-our-client-sdks). The Rust-based sync client is enabled by default on the latest version of all SDKs. If you are on a lower version, follow the instructions below to enable it.
+streams:
+ todos:
+ query: SELECT * FROM todos WHERE owner_id = auth.user_id()
+```
+
+
+
+Add a `sync_config` section to your `config.yaml`. Using a **separate file** is recommended (e.g. `sync_config: path: sync-config.yaml`). Put the stream definition in that file:
-
-
-
- The Rust client became the default in Web SDK v1.32.0, React Native SDK v1.29.0, Node.js SDK v0.16.0, and Capacitor SDK v0.3.0. For lower versions, pass the `clientImplementation` option when connecting:
+```yaml sync-config.yaml
+config:
+ edition: 3
- ```js
- await db.connect(new MyConnector(), {
- clientImplementation: SyncClientImplementation.RUST
- });
- ```
+streams:
+ todos:
+ query: SELECT * FROM todos WHERE owner_id = auth.user_id()
+```
- You can migrate back to the JavaScript client later by removing the option.
-
-
- The Rust client became the default in Flutter/Dart SDK v1.17.0. Pass the `syncImplementation` option when connecting:
+You can also use inline `sync_config: content: |` with the YAML nested in your main config. See [Self-Hosted Instance Configuration](/configuration/powersync-service/self-hosted-instances#sync-streams--sync-rules) for both options.
+
+
- ```dart
- database.connect(
- connector: YourConnector(),
- options: const SyncOptions(
- syncImplementation: SyncClientImplementation.rust,
- ),
- );
- ```
+Available stream options:
- You can migrate back to the Dart client later by removing the option.
-
-
- The Rust client became the default in Kotlin SDK v1.9.0. For lower versions, pass the `newClientImplementation` option when connecting:
+
- ```kotlin
- //@file:OptIn(ExperimentalPowerSyncAPI::class)
- database.connect(MyConnector(), options = SyncOptions(
- newClientImplementation = true,
- ))
- ```
+## Basic Examples
- You can migrate back to the Kotlin client later by removing the option.
-
-
- The Rust client became the default in Swift SDK v1.8.0. For lower versions, pass the `newClientImplementation` option when connecting:
+There are two independent concepts to understand:
- ```swift
- @_spi(PowerSyncExperimental) import PowerSync
+- _What_ data the stream returns. For example:
+ - *Global data*: No parameters. Same data for all users (e.g. reference tables like categories).
+  - *Filtered data*: Filters rows by a parameter value. Parameters can be _auth parameters_ from the JWT (such as the user ID or other claims), _subscription parameters_ (passed by the client when it subscribes to a stream), or _connection parameters_ (passed when connecting). Different users get different data based on these parameters. See [Using Parameters](/sync/streams/parameters) for the full reference.
+- _When_ the client syncs the data:
+ - *Auto-subscribe*: Client automatically subscribes on connect (`auto_subscribe: true`)
+ - *On-demand*: Client explicitly subscribes when needed (default behavior)
- try await db.connect(connector: connector, options: ConnectOptions(
- newClientImplementation: true,
- ))
- ```
+### Global Data
- You can migrate back to the Swift client later by removing the option.
-
-
- The Rust client was introduced as the default in .NET SDK v0.0.5-alpha.1. No additional configuration is required.
-
-
-
-* Sync Stream definitions. They are currently defined in the same YAML file as Sync Rules: `sync_rules.yaml` (PowerSync Cloud) or `config.yaml` (Open Edition/self-hosted). To enable Sync Streams, add the following configuration:
-
- ```yaml sync_rules.yaml
- config:
- # see https://docs.powersync.com/sync/advanced/compatibility
- # this edition also deploys several backwards-incompatible fixes
- # see the docs for details
- edition: 2
-
- streams:
- ... # see 'Stream Definition Syntax' section below
- ```
-
-## Stream Definition Syntax
-
-You specify **stream definitions** similar to bucket definitions in Sync Rules. Clients then subscribe to the defined streams one or more times, with different parameters.
-
-Syntax:
-```yaml sync_rules.yaml
-streams:
- :
- query: string # similar to Data Queries in Sync Rules, but also support limited subqueries.
- auto_subscribe: boolean # true to subscribe to this stream by default (similar to how Sync Rules work), false (default) if clients should explicitly subscribe.
- priority: number # sync priority, same as in Sync Rules: https://docs.powersync.com/sync/advanced/prioritized-sync
- accept_potentially_dangerous_queries: boolean # silence warnings on dangerous queries, same as in Sync Rules.
-```
+Data without parameters is "global" data, meaning the same data goes to all users/clients. This is useful for reference tables:
-Basic example:
-```yaml sync_rules.yaml
+```yaml
config:
- edition: 2
+ edition: 3
+
streams:
- issue: # Define a stream to a specific issue
- query: select * from issues where id = subscription.parameters() ->> 'id'
- issue_comments: # Define a stream to a specific issue's comments
- query: select * from comments where issue_id = subscription.parameters() ->> 'id'
+ # Same categories for everyone
+ categories:
+ query: SELECT * FROM categories
+ # Same active products for everyone
+ products:
+ query: SELECT * FROM products WHERE active = true
```
+
+Global data streams still require clients to subscribe explicitly unless you set `auto_subscribe: true`.
+
-### Just Queries with Subqueries
+### Filtering Data by User
-Whereas Sync Rules had separate [Parameter Queries](/sync/rules/parameter-queries) and [Data Queries](/sync/rules/data-queries), Sync Streams only have a `query`. Instead of Parameter Queries, Sync Streams can use parameters directly in the query, and support a limited form of subqueries. For example:
+Use `auth.user_id()` or other [JWT claims](/sync/streams/parameters#auth-parameters) to return different data per user:
-```yaml sync_rules.yaml
-# use parameters directly in the query (see below for details on accessing parameters)
-select * from issues where id = subscription.parameters() ->> 'id' and owner_id = auth.user_id()
+```yaml
+config:
+ edition: 3
-# "in (subquery)" replaces parameter queries:
-select * from comments where issue_id in (select id from issues where owner_id = auth.user_id())
-```
+streams:
+ # Each user gets their own lists
+ my_lists:
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
-Under the hood, Sync Streams use the same bucket system as Sync Rules, so you get the same functionality as before with Parameter Queries, however, the Sync Streams syntax is closer to plain SQL.
+ # Each user gets their own orders
+ my_orders:
+ query: SELECT * FROM orders WHERE user_id = auth.user_id()
+```
+### Filtering Data Based on Subscription Parameters
-### Accessing Parameters
+Use `subscription.parameter()` for data that clients subscribe to explicitly:
-We have streamlined how different kinds of parameters are accessed in Sync Streams [compared](/sync/rules/parameter-queries) to Sync Rules.
+```yaml
+config:
+ edition: 3
-**Subscription Parameters**: Passed from the client when it subscribes to a Sync Stream. See [Client-Side Syntax](#client-side-syntax) below. Clients can subscribe to the same stream multiple times with
-different parameters:
+streams:
+ # Sync todos for a specific list when the client subscribes with a list_id
+ list_todos:
+ query: |
+ SELECT * FROM todos
+ WHERE list_id = subscription.parameter('list_id')
+ AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
+```
-```yaml
-subscription.parameters() # all parameters for the subscription, as JSON
-subscription.parameter('key') # shorthand for getting a single specific parameter
+```js
+// Client subscribes with the list they want to view
+const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();
```
-**Auth Parameters**: Claims from the JWT:
+### Using Auto-Subscribe
+
+Set `auto_subscribe: true` to sync data automatically when clients connect. This is useful for:
+- Reference data that all users need, or that is needed on many screens in the app
+- User data that should always be available offline
+- Maintaining [Sync Rules](/sync/rules/overview) default behavior ("sync everything upfront") when migrating to Sync Streams
```yaml
-auth.parameters() # JWT token payload, as JSON
-auth.parameter('key') # short-hand for getting a single specific token payload parameter
-auth.user_id() # same as auth.parameter('sub')
+config:
+ edition: 3
+
+streams:
+ # Global data, synced automatically
+ categories:
+ auto_subscribe: true
+ query: SELECT * FROM categories
+
+ # User-scoped data, synced automatically
+ my_orders:
+ auto_subscribe: true
+ query: SELECT * FROM orders WHERE user_id = auth.user_id()
+
+ # Parameterized data, subscribed on-demand (no auto_subscribe)
+ order_items:
+ query: |
+ SELECT * FROM order_items
+ WHERE order_id = subscription.parameter('order_id')
+ AND order_id IN (SELECT id FROM orders WHERE user_id = auth.user_id())
```
-**Connection Parameters**: Specified "globally" on the connection level. These are the equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules:
+## Client-Side Usage
-```yaml
-connection.parameters() # all parameters for the connection, as JSON
-connection.parameter('key') # shorthand for getting a single specific parameter
-```
+Subscribe to streams from your client app:
-### Usage Examples: Sync Rules vs Sync Streams
+
+
+```js
+const sub = await db.syncStream('list_todos', { list_id: 'abc123' })
+ .subscribe({ ttl: 3600 });
-
+// Wait for this subscription to have synced
+await sub.waitForFirstSync();
-### Global data
-**Sync Rules:**
-```yaml sync_rules.yaml
- bucket_definitions:
- global:
- data:
- # Sync all todos
- - SELECT * FROM todos
- # Sync all lists except archived ones
- - SELECT * FROM lists WHERE archived = false
+// When the component needing the subscription is no longer active...
+sub.unsubscribe();
```
-**Sync Streams:** "Global" data — the data you want all of your users to have by default — is also defined as streams. Specify `auto_subscribe: true` so your users subscribe to them by default.
-```yaml sync_rules.yaml
- streams:
- all_todos:
- query: SELECT * FROM todos
- auto_subscribe: true
- unarchived_lists:
- query: SELECT * FROM lists WHERE archived = false
- auto_subscribe: true
-```
+**React hooks:**
-### A user's owned lists, with a priority
-**Sync Rules:**
-```yaml sync_rules.yaml
- bucket_definitions:
- user_lists:
- priority: 1 # See https://docs.powersync.com/sync/advanced/prioritized-sync
- parameters: SELECT request.user_id() as user_id
- data:
- - SELECT * FROM lists WHERE owner_id = bucket.user_id
+```jsx
+const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: 'abc123' } });
+// Check download progress or subscription information
+stream?.progress;
+stream?.subscription.hasSynced;
```
-**Sync Streams:**
-```yaml sync_rules.yaml
- streams:
- user_lists:
- priority: 1 # See https://docs.powersync.com/sync/advanced/prioritized-sync
- query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+The `useQuery` hook can wait for Sync Streams before running queries:
+
+```jsx
+const { data } = useQuery(
+ 'SELECT * FROM todos WHERE list_id = ?',
+ [listId],
+ { streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }] }
+);
```
+
+
+
+```dart
+final sub = await db
+ .syncStream('list_todos', {'list_id': 'abc123'})
+ .subscribe(ttl: const Duration(hours: 1));
-### Grouping by `list_id`
-**Sync Rules:**
-```yaml sync_rules.yaml
- bucket_definitions:
- owned_lists:
- parameters: |
- SELECT id as list_id FROM lists WHERE
- owner_id = request.user_id()
- data:
- - SELECT * FROM lists WHERE lists.id = bucket.list_id
- - SELECT * FROM todos WHERE todos.list_id = bucket.list_id
+// Wait for this subscription to have synced
+await sub.waitForFirstSync();
+
+// When the component needing the subscription is no longer active...
+sub.unsubscribe();
```
-**Sync Streams:**
-```yaml sync_rules.yaml
- streams:
- owned_lists:
- query: SELECT * FROM lists WHERE owner_id = auth.user_id()
- list_todos:
- query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id') AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
-
+
+
+
+```kotlin
+val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123")))
+ .subscribe(ttl = 1.0.hours)
+
+// Wait for this subscription to have synced
+sub.waitForFirstSync()
+
+// When the component needing the subscription is no longer active...
+sub.unsubscribe()
```
-### Parameters usage
-**Sync Rules:**
-```yaml sync_rules.yaml
- bucket_definitions:
- posts:
- parameters: SELECT (request.parameters() ->> 'current_page') as page_number
- data:
- - SELECT * FROM posts WHERE page_number = bucket.page_number
+
+
+
+```swift
+let sub = try await db.syncStream(name: "list_todos", params: ["list_id": JsonValue.string("abc123")])
+ .subscribe(ttl: 60 * 60, priority: nil) // 1 hour
+
+// Wait for this subscription to have synced
+try await sub.waitForFirstSync()
+
+// When the component needing the subscription is no longer active...
+try await sub.unsubscribe()
```
-**Sync Streams:**
-```yaml sync_rules.yaml
- streams:
- posts:
- query: SELECT * FROM posts WHERE page_number = subscription.parameter('page_number')
+
+
+
+```csharp
+var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" })
+ .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromHours(1) });
+
+// Wait for this subscription to have synced
+await sub.WaitForFirstSync();
+
+// When the component needing the subscription is no longer active...
+sub.Unsubscribe();
```
-Note that the behavior here is different to Sync Rules because `subscription.parameter('page_number')` is local to the subscription, so the Sync Stream can be subscribed to multiple times with different page numbers, whereas Sync Rules only allow a single global Client Parameter value at a time. Connection Parameters (`connection.parameter()`) are available in Sync Streams as the equivalent of the global Client Parameters in Sync Rules, but Subscription Parameters are recommended because they are much more flexible.
-
-### Specific columns/fields, renames and transformations
-
-Selecting, renaming or transforming specific columns/fields is identical between Sync Rules and Sync Streams:
-
-```yaml sync_rules.yaml
- streams:
- todos:
- # Example 1: Select specific columns
- query: SELECT id, name, owner_id FROM todos
-
- # Example 2: Rename columns
- # query: SELECT id, name, created_timestamp AS created_at FROM todos
-
- # Example 3: Cast number to text
- # query: SELECT id, item_number :: text AS item_number FROM todos
-
- # Example 4: Alternative syntax for the same cast
- # query: id, CAST(item_number as TEXT) AS item_number FROM todos
-
- # Example 5: Convert binary data (bytea) to base64
- # query: id, base64(thumbnail) AS thumbnail_base64 FROM todos
-
- # Example 6: Extract field from JSON or JSONB column
- # query: id, metadata_json ->> 'description' AS description FROM todos
-
- # Example 7: Convert time to epoch number
- # query: id, unixepoch(created_at) AS created_at FROM todos
+
+
+
+### TTL (Time-To-Live)
+
+Each subscription has a `ttl` that keeps data cached after unsubscribing. This enables warm cache behavior — when users return to a screen and you re-subscribe to relevant streams, data is already available on the client. Default TTL is 24 hours. See [Client-Side Usage](/sync/streams/client-usage) for details.
+
+```js
+// Set TTL in seconds when subscribing
+const sub = await db.syncStream('todos', { list_id: 'abc' })
+ .subscribe({ ttl: 3600 }); // Cache for 1 hour after unsubscribe
```
-
+## Developer Notes
+- **SQL Syntax**: Stream queries use a SQL-like syntax with `SELECT` statements. You can use subqueries, `INNER JOIN`, and [CTEs](/sync/streams/ctes) for filtering. `GROUP BY`, `ORDER BY`, and `LIMIT` are not supported. See [Writing Queries](/sync/streams/queries) for details on joins, multiple queries per stream, and other features.
-## Client-Side Syntax
+- **Type Conversion**: Data types from your source database (Postgres, MongoDB, MySQL, SQL Server) are converted when synced to the client's SQLite database. SQLite has a limited type system, so most types become `text` and you may need to parse or cast values in your app code. See [Type Mapping](/sync/types) for details on how each type is handled.
-In general, each SDK lets you:
+- **Primary Key**: PowerSync requires every synced table to have a primary key column named `id` of type `text`. If your backend uses a different column name or type, you'll need to map it. For MongoDB, collections use `_id` as the ID field; you must alias it in your stream queries (e.g. `SELECT _id as id, * FROM your_collection`).
-* Use `db.syncStream(name, [subscription-params])` to get a `SyncStream` instance.
-* Call `subscribe()` on a `SyncStream` to get a `SyncStreamSubscription`. This gives you access to `waitForFirstSync()` and `unsubscribe()`.
-* Inspect `SyncStatus` for a list of `SyncSubscriptionDefinition`s describing all Sync Streams your app is subscribed to (either due to an explicit subscription or because the Sync Stream has `auto_subscribe: true`). It also reports per-stream download progress.
-* Each Sync Stream has a `ttl` (time-to-live). After you call `unsubscribe()`, or when the page/app closes, the stream keeps syncing for the `ttl` duration, enabling caching-like behavior. Each SDK lets you specify the `ttl`, or ignore the `ttl` and delete the data as soon as possible. If not specified, a default TTL of 24 hours applies.
+- **Case Sensitivity**: To avoid issues across different databases and platforms, use **lowercase identifiers** for all table and column names in your Sync Streams. If your backend uses mixed case, see [Case Sensitivity](/sync/advanced/case-sensitivity) for how to handle it.
-Select your language for specific examples:
-
-
- ```js
- const sub = await powerSync.syncStream('issues', {id: 'issue-id'}).subscribe(ttl: 3600);
-
- // Resolve current status for subscription
- const status = powerSync.currentStatus.forStream(sub);
- const progress = status?.progress;
-
- // Wait for this subscription to have synced
- await sub.waitForFirstSync();
-
- // When the component needing the subscription is no longer active...
- sub.unsubscribe();
- ```
-
- If you're using React, you can also use hooks to automatically subscribe components to Sync Streams:
-
- ```js
- const stream = useSyncStream({ name: 'todo_list', parameters: { list: 'foo' } });
- // Can then check for download progress or subscription information
- stream?.progress;
- stream?.subscription.hasSynced;
- ```
-
- This hook is useful when you want to explicitly ensure a stream is active (for example a root component) or when you need progress/hasSynced state; this makes data available for all child components without each query declaring the stream.
-
- Additionally, the `useQuery` hook for React can wait for Sync Streams to be complete before running
- queries. Pass `streams` only when the component knows which specific stream subscription(s) it depends on and it should wait before querying.
-
- ```js
- const results = useQuery(
- 'SELECT ...',
- queryParameters,
- // This will wait for the stream to sync before running the query
- { streams: [{ name: 'todo_list', parameters: { list: 'foo' }, waitForStream: true }] }
- );
- ```
-
-
-
- ```dart
- final sub = await db
- .syncStream('issues', {'id': 'issue-id'})
- .subscribe(ttl: const Duration(hours: 1));
-
- // Resolve current status for subscription
- final status = db.currentStatus.forStream(sub);
- final progress = status?.progress;
-
- // Wait for this subscription to have synced
- await sub.waitForFirstSync();
-
- // When the component needing the subscription is no longer active...
- sub.unsubscribe();
- ```
-
-
-
- ```Kotlin
- val sub = database.syncStream("issues", mapOf("id" to JsonParam.String("issue-id"))).subscribe(ttl = 1.0.hours);
-
- // Resolve current status for subscription
- val status = database.currentStatus.forStream(sub)
- val progress = status?.progress
-
- // Wait for this subscription to have synced
- sub.waitForFirstSync()
-
- // When the component needing the subscription is no longer active...
- sub.unsubscribe()
- ```
-
- If you're using Compose, you can use the `composeSyncStream` extension to subscribe to a stream while
- a composition is active:
-
- ```Kotlin
- @Composable
- fun TodoListPage(db: PowerSyncDatabase, id: String) {
- val syncStream = db.composeSyncStream(name = "list", parameters = mapOf("list_id" to JsonParam.String(id)))
- // Define component based on stream state
- }
- ```
-
-
-
- ```csharp
- var sub = await db.SyncStream("issues", new() { ["id"] = "issue-id" })
- .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromHours(1) });
-
- // Resolve current status for subscription
- var status = db.CurrentStatus.ForStream(sub);
- var progress = status?.Progress;
-
- // Wait for this subscription to have synced
- await sub.WaitForFirstSync();
-
- // When the component needing the subscription is no longer active...
- sub.Unsubscribe();
- ```
-
-
-
- ```swift
- let sub = try await db.syncStream(name: "issues", params: ["id": JsonValue.string("issue-id")]).subscribe(
- ttl: 60 * 60, // 1 hour
- priority: nil
- );
-
- // Resolve current status for subscription
- let status = db.currentStatus.forStream(stream: sub)
- let progress = status?.progress
-
- // Wait for this subscription to have synced
- try await sub.waitForFirstSync()
-
- // When the component needing the subscription is no longer active...
- try await sub.unsubscribe()
- ```
-
-
- ```rust
- let stream = db.sync_stream(
- "issues",
- Some(&{
- let mut params = serde_json::Map::new();
- params.insert(
- "id".to_string(),
- serde_json::Value::String("issue-id".to_string()),
- );
- serde_json::Value::Object(params)
- }),
- );
-
- let sub = stream
- .subscribe_with({
- let mut options = StreamSubscriptionOptions::default();
- options.with_ttl(Duration::from_hours(1));
- options
- })
- .await
- .expect("could not subscribe");
-
- // Resolve current status for subscription.
- let status = db.status();
- let status = status.for_stream(&sub /* or &stream */);
- let progress = status.and_then(|f| f.progress);
-
- // Wait for this subscription to have synced
- sub.wait_for_first_sync().await;
-
- // When the component needing the subscription is no longer active...
- // In Rust, simply dropping the StreamSubscription is enough too. So
- // subscriptions should be referenced until they're no longer used.
- sub.unsubscribe();
- ```
-
-
+- **Bucket Limits**: PowerSync uses internal partitions called [buckets](/architecture/powersync-service#bucket-system) to efficiently sync data. There's a default [limit of 1,000 buckets](/resources/performance-and-limits) per user/client. Each unique combination of a stream and its parameters creates one bucket, so keep this in mind when designing streams that use subscription parameters. You can use [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to reduce bucket count.
-## Examples
+- **Troubleshooting**: If data isn't syncing as expected, the [Sync Diagnostics Client](/tools/diagnostics-client) helps you inspect what's happening for a specific user — you can see which buckets the user has and what data is being synced.
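The Type Conversion note above can be sketched in plain JavaScript (the row shape and column names are made up for illustration, not part of any SDK API):

```javascript
// Hypothetical row as it might arrive from the client-side SQLite database:
// numbers, JSON, and timestamps often arrive as text and are parsed in app code.
const row = {
  item_number: '42',
  metadata: '{"color": "blue"}',
  created_at: '1704067200', // epoch seconds stored as text
};

const itemNumber = Number(row.item_number);
const metadata = JSON.parse(row.metadata);
const createdAt = new Date(Number(row.created_at) * 1000);

console.log(itemNumber, metadata.color, createdAt.toISOString());
// 42 blue 2024-01-01T00:00:00.000Z
```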
-
-
- Try the [`react-supabase-todolist-sync-streams`](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-sync-streams) demo app by following the instructions in the README.
-
- In this demo:
-
- - The app syncs `lists` by default (demonstrating equivalent behavior to Sync Rules, i.e. optimized for offline-first).
- - The app syncs `todos` on demand when a user opens a list.
- - When the user navigates back to the same list, they won't see a loading state — demonstrating caching behavior.
-
-
-
- Try the [`supabase-todolist`](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app, which we updated to use Sync Streams (Sync Rules are still supported).
-
- Deploy the following Sync Streams:
-
- ```yaml sync_rules.yaml
- config:
- edition: 2
- streams:
- lists:
- query: SELECT * FROM lists
- auto_subscribe: true
- todos:
- query: SELECT * FROM todos WHERE list_id = subscription.parameter('list')
- ```
-
- In this demo:
-
- - The app syncs `lists` by default (demonstrating equivalent behavior to Sync Rules, i.e. optimized for offline-first).
- - The app syncs `todos` on demand when a user opens a list.
- - When the user navigates back to the same list, they won't see a loading state — demonstrating caching behavior.
-
-
- In progress, follow along: https://github.com/powersync-ja/powersync-kotlin/pull/270
-
-
- Example coming soon.
-
-
\ No newline at end of file
+## Examples & Demos
+
+See [Examples & Demos](/sync/streams/examples) for working demo apps and complete application patterns.
+
+## Migrating from Legacy Sync Rules
+
+If you have an existing project using legacy Sync Rules, see the [Migration Guide](/sync/streams/migration) for step-by-step instructions, syntax changes, and examples.
diff --git a/sync/streams/parameters.mdx b/sync/streams/parameters.mdx
new file mode 100644
index 00000000..3562e0c6
--- /dev/null
+++ b/sync/streams/parameters.mdx
@@ -0,0 +1,146 @@
+---
+title: "Using Parameters"
+description: Filter data dynamically using subscription, auth, and connection parameters in your stream queries.
+---
+
+Parameters let you filter data dynamically based on who the user is and what they need to see. Sync Streams support three types of parameters, each serving a different purpose.
+
+## Subscription Parameters
+
+Passed from the client when it subscribes to a stream. This is the most common way to request specific data on demand.
+
+For example, if a user opens two different to-do lists, the client subscribes to the same `list_todos` stream twice, once for each list:
+
+```yaml
+streams:
+ list_todos:
+ query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id')
+```
+
+```js
+// User opens List A - subscribe with list_id = 'list-a'
+const subA = await db.syncStream('list_todos', { list_id: 'list-a' }).subscribe();
+
+// User also opens List B - subscribe again with list_id = 'list-b'
+const subB = await db.syncStream('list_todos', { list_id: 'list-b' }).subscribe();
+
+// Both lists' todos are now syncing independently
+```
+
+| Function | Description |
+|----------|-------------|
+| `subscription.parameter('key')` | Get a single parameter by name |
+| `subscription.parameters()` | All parameters as JSON (for dynamic access) |
+
+## Auth Parameters
+
+Claims from the user's JWT. Use these to filter data based on who the user is. These values are tamper-proof: they are signed into the JWT by your authentication system, so clients cannot alter them.
+
+```yaml
+streams:
+ my_lists:
+ query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+
+ # Access custom JWT claims
+ org_data:
+ query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id')
+```
+
+| Function | Description |
+|----------|-------------|
+| `auth.user_id()` | The user's ID (same as `auth.parameter('sub')`) |
+| `auth.parameter('key')` | Get a specific JWT claim |
+| `auth.parameters()` | Full JWT payload as JSON |
+
+## Connection Parameters
+
+Specified "globally" at the connection level, before any streams are subscribed. These are the equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules. Use them when you need a value that applies across all streams for the session.
+
+```yaml
+streams:
+ app_config:
+ query: SELECT * FROM config WHERE environment = connection.parameter('environment')
+```
+
+| Function | Description |
+|----------|-------------|
+| `connection.parameter('key')` | Get a single connection parameter |
+| `connection.parameters()` | All connection parameters as JSON |
+
+
+Changing connection parameters requires reconnecting. For values that change during a session, use subscription parameters instead.
+
+
+See [Client Usage](/sync/streams/client-usage#connection-parameters) for details on specifying connection parameters in your client-side code.
+
+## When to Use Each
+
+**Subscription parameters** are the most flexible option. Use them when the client needs to choose what data to sync at runtime. Each subscription operates independently, so a user can have multiple subscriptions to the same stream with different parameters.
+
+**Auth parameters** are the most secure option. Use them when you need to filter data based on who the user is. Since these values come from the signed JWT, they can't be tampered with by the client.
+
+**Connection parameters** apply globally across all streams for the session. Use them for values that rarely change, like environment flags or feature toggles. Keep in mind that changing them requires reconnecting.
+
+For most use cases, subscription parameters are the best choice. They're more flexible and work well with modern app patterns like multiple tabs.
+
+## Expanding JSON Arrays
+
+If a user's JWT contains an array of IDs (e.g., `{ "project_ids": ["proj-1", "proj-2", "proj-3"] }`), you can expand it to sync all matching records. The example below syncs all three projects to the user/client:
+
+**Shorthand syntax** (recommended):
+
+```yaml
+streams:
+ # User's JWT contains: { "project_ids": ["proj-1", "proj-2", "proj-3"] }
+ my_projects:
+ auto_subscribe: true
+ query: SELECT * FROM projects WHERE id IN auth.parameter('project_ids')
+```
+
+**JOIN syntax** with table-valued function:
+
+```yaml
+streams:
+ my_projects:
+ auto_subscribe: true
+ query: |
+ SELECT * FROM projects
+ JOIN json_each(auth.parameter('project_ids')) AS allowed ON projects.id = allowed.value
+```
+
+**Subquery syntax**:
+
+```yaml
+streams:
+ my_projects:
+ auto_subscribe: true
+ query: |
+ SELECT * FROM projects
+ WHERE id IN (SELECT value FROM json_each(auth.parameter('project_ids')))
+```
+
+All three sync the same data: projects whose IDs are in the user's JWT `project_ids` claim.
+
+
+`json_each()` works with auth and connection parameters. It can also be used with columns from joined tables in some cases (e.g. `SELECT * FROM lists WHERE id IN (SELECT lists.value FROM access_control a, json_each(a.allowed_lists) as lists WHERE a.user = auth.user_id())`).
+
+
+## Combining Parameters
+
+You can combine different parameter types in a single query. A common pattern is using subscription parameters for on-demand data while using auth parameters for authorization:
+
+```yaml
+streams:
+ # User subscribes with a list_id, but can only see lists they have access to
+ list_items:
+ query: |
+ SELECT * FROM items
+ WHERE list_id = subscription.parameter('list_id')
+ AND list_id IN (
+ SELECT id FROM lists
+ WHERE owner_id = auth.user_id()
+ OR id IN (SELECT list_id FROM list_shares WHERE shared_with = auth.user_id())
+ )
+```
+
+See [Writing Queries](/sync/streams/queries) for more filtering techniques using subqueries and joins.
diff --git a/sync/streams/queries.mdx b/sync/streams/queries.mdx
new file mode 100644
index 00000000..8e4838f4
--- /dev/null
+++ b/sync/streams/queries.mdx
@@ -0,0 +1,323 @@
+---
+title: "Writing Queries"
+description: Learn query syntax for filtering with subqueries and joins, selecting columns, and transforming data types.
+sidebarTitle: "Writing Queries"
+---
+
+This page covers query syntax for Sync Streams: filtering, selecting columns, and transforming data.
+
+For parameter usage, see [Using Parameters](/sync/streams/parameters). For real-world patterns, see [Examples, Patterns & Demos](/sync/streams/examples).
+
+## Basic Queries
+
+The simplest stream query syncs all rows from a table:
+
+```yaml
+streams:
+  categories:
+    auto_subscribe: true
+    query: SELECT * FROM categories
+```
+
+Add a `WHERE` clause to filter:
+
+```yaml
+streams:
+  active_products:
+    auto_subscribe: true
+    query: SELECT * FROM products WHERE active = true
+```
+
+## Filtering by User
+
+Most apps need to sync different data to different users. Use `auth.user_id()` to filter by the authenticated user:
+
+```yaml
+streams:
+  my_lists:
+    auto_subscribe: true
+    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+```
+
+This syncs only the lists owned by the current user. The user ID comes from the `sub` claim in the user's JWT. See [Auth Parameters](/sync/streams/parameters#auth-parameters).
+
+## On-Demand Data with Subscription Parameters
+
+For data that should only sync when the user navigates to a specific screen, use subscription parameters. The client passes these when subscribing to a stream:
+
+```yaml
+streams:
+ list_todos:
+ query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id')
+```
+
+
+**Authorization:** This example filters only by `subscription.parameter('list_id')`. Any client can pass any `list_id`, so a user could access another user's todos. For production, add an authorization check so the user can only see lists they own or have access to — for example, add `AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id() OR id IN (SELECT list_id FROM list_shares WHERE shared_with = auth.user_id()))`. See [Combining Parameters with Subqueries](#combining-parameters-with-subqueries) below.
+
+
+```js
+// When user opens a specific list, subscribe with that list's ID
+const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe();
+```
+
+See [Using Parameters](/sync/streams/parameters) for the full reference on parameters.
+
+## Selecting Columns
+
+Select specific columns instead of `*` to reduce data transfer:
+
+```yaml
+streams:
+ users:
+ query: SELECT id, name, email, avatar_url FROM users WHERE org_id = auth.parameter('org_id')
+```
+
+### Renaming Columns
+
+Use `AS` to rename columns in the synced data:
+
+```yaml
+streams:
+ todos:
+ query: SELECT id, name, created_timestamp AS created_at FROM todos
+```
+
+### Type Transformations
+
+PowerSync syncs data to SQLite on the client. You may need to transform types for compatibility:
+
+```yaml
+streams:
+ items:
+ query: |
+ SELECT
+ id,
+ CAST(item_number AS TEXT) AS item_number, -- Cast to text
+ metadata_json ->> 'description' AS description, -- Extract field from JSON
+ base64(thumbnail) AS thumbnail_base64, -- Binary to base64
+ unixepoch(created_at) AS created_at -- DateTime to epoch
+ FROM items
+```
+
+See [Type Mapping](/sync/types) for details on how each database type is handled.
+
+## Using Subqueries
+
+Subqueries let you filter based on related tables. Use `IN (SELECT ...)` to sync data where a foreign key matches rows in another table:
+
+```yaml
+streams:
+ # Sync comments for issues owned by the current user
+ my_issue_comments:
+ query: |
+ SELECT * FROM comments
+ WHERE issue_id IN (SELECT id FROM issues WHERE owner_id = auth.user_id())
+```
+
+### Nested Subqueries
+
+Subqueries can be nested to traverse multiple levels of relationships. This is useful for normalized database schemas:
+
+```yaml
+streams:
+ # Sync tasks for projects in organizations the user belongs to
+ org_tasks:
+ query: |
+ SELECT * FROM tasks
+ WHERE project_id IN (
+ SELECT id FROM projects WHERE org_id IN (
+ SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ )
+ )
+```
+
+### Combining Parameters with Subqueries
+
+A common pattern is using subscription parameters to select what data to sync, while using subqueries for authorization:
+
+```yaml
+streams:
+ # User subscribes with a list_id, but can only see lists they own or that are shared with them
+ list_items:
+ query: |
+ SELECT * FROM items
+ WHERE list_id = subscription.parameter('list_id')
+ AND list_id IN (
+ SELECT id FROM lists
+ WHERE owner_id = auth.user_id()
+ OR id IN (SELECT list_id FROM list_shares WHERE shared_with = auth.user_id())
+ )
+```
+
+## Using Joins
+
+For complex queries that traverse multiple tables, join syntax is often easier to read than nested subqueries. You can use `JOIN` or `INNER JOIN` (they're equivalent). For the exact supported JOIN syntax and restrictions, see [Supported SQL — JOIN syntax](/sync/supported-sql#join-syntax).
+
+Consider this query:
+
+```yaml
+streams:
+ # Nested subquery version
+ user_comments:
+ query: |
+ SELECT * FROM comments WHERE issue_id IN (
+ SELECT id FROM issues WHERE project_id IN (
+ SELECT project_id FROM project_members WHERE user_id = auth.user_id()
+ )
+ )
+```
+
+The same query using joins:
+
+```yaml
+streams:
+ # Join version - same result, easier to read
+ user_comments:
+ query: |
+ SELECT comments.* FROM comments
+ INNER JOIN issues ON comments.issue_id = issues.id
+ INNER JOIN project_members ON issues.project_id = project_members.project_id
+ WHERE project_members.user_id = auth.user_id()
+```
+
+Both queries sync the same data. Choose whichever style is clearer for your use case.
+
+### Multiple Joins
+
+You can chain multiple joins to traverse complex relationships. This example joins four tables to sync checkpoints for assignments the user has access to.
+
+```yaml
+streams:
+ my_checkpoints:
+ query: |
+ SELECT checkpoint.* FROM user_assignment_scope uas
+ JOIN assignment a ON a.id = uas.assignment_id
+ JOIN assignment_checkpoint ac ON ac.assignment_id = a.id
+ JOIN checkpoint ON checkpoint.id = ac.checkpoint_id
+ WHERE uas.user_id = auth.user_id()
+ AND a.active = true
+```
+
+### Self-Joins
+
+You can join the same table multiple times; aliases are required to distinguish each occurrence (e.g. `gm1` and `gm2` for the two `group_memberships` joins). This is useful for finding related records through a shared relationship — for example, finding all users who share a group with the current user:
+
+```yaml
+streams:
+ users_in_my_groups:
+ query: |
+ SELECT users.* FROM users
+ JOIN group_memberships gm1 ON users.id = gm1.user_id
+ JOIN group_memberships gm2 ON gm1.group_id = gm2.group_id
+ WHERE gm2.user_id = auth.user_id()
+```
+
+### Join Limitations
+
+When writing stream queries with JOINs, keep in mind: use only `JOIN` or `INNER JOIN`; select columns from a single table (e.g. `comments.*`); and use simple equality conditions (`table1.column = table2.column`). For the full list of supported JOIN syntax and invalid examples, see [Supported SQL — JOIN syntax](/sync/supported-sql#join-syntax).
+
+## Multiple Queries per Stream
+
+You can group multiple queries into a single stream using `queries` instead of `query`. This is useful when several tables share the same access pattern:
+
+```yaml
+streams:
+ user_data:
+ auto_subscribe: true
+ queries:
+ - SELECT * FROM notes WHERE owner_id = auth.user_id()
+ - SELECT * FROM settings WHERE user_id = auth.user_id()
+ - SELECT * FROM preferences WHERE user_id = auth.user_id()
+```
+
+Clients subscribe once to the stream, and PowerSync merges the data from all queries. This is more efficient than defining separate streams, each of which requires its own subscription.
+
+### When to Use Multiple Queries
+
+Use `queries` when:
+- Multiple tables have the same filtering logic (e.g., all filtered by `user_id`)
+- You want clients to manage a single subscription, which also reduces bucket count (see the Developer Notes on bucket limits)
+- Related data should sync together
+
+```yaml
+streams:
+ # All project-related data syncs together
+ project_details:
+ queries:
+ - SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id')
+ - SELECT * FROM files WHERE project_id = subscription.parameter('project_id')
+ - SELECT * FROM comments WHERE project_id = subscription.parameter('project_id')
+```
+```
+
+### Compatibility Requirements
+
+For multiple queries in one stream to be valid, they must use compatible parameter inputs. In practice, this means they should filter on the same parameters in the same way:
+
+```yaml
+# Valid - all queries use the same parameter pattern
+streams:
+ user_content:
+ queries:
+ - SELECT * FROM notes WHERE user_id = auth.user_id()
+ - SELECT * FROM bookmarks WHERE user_id = auth.user_id()
+
+# Valid - all queries use the same subscription parameter
+streams:
+ project_data:
+ queries:
+ - SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id')
+ - SELECT * FROM files WHERE project_id = subscription.parameter('project_id')
+```
+```
+
+### Combining with CTEs
+
+Multiple queries work well with [Common Table Expressions (CTEs)](/sync/streams/ctes) to share the filtering logic and keep all results in one stream, requiring clients to manage one subscription instead of many:
+
+```yaml
+streams:
+ org_data:
+ auto_subscribe: true
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ queries:
+ - SELECT * FROM projects WHERE org_id IN user_orgs
+ - SELECT * FROM repositories WHERE org_id IN user_orgs
+ - SELECT * FROM team_members WHERE org_id IN user_orgs
+```
+
+## Complete Example
+
+A full configuration combining multiple techniques:
+
+```yaml
+config:
+ edition: 3
+
+streams:
+ # Global reference data (no parameters, auto-subscribed)
+ categories:
+ auto_subscribe: true
+ query: SELECT id, name, CAST(sort_order AS TEXT) AS sort_order FROM categories
+
+ # User's own items with transformed fields (auth parameter, auto-subscribed)
+ my_items:
+ auto_subscribe: true
+ query: |
+ SELECT
+ id,
+ name,
+ metadata ->> 'status' AS status,
+ unixepoch(created_at) AS created_at,
+ base64(thumbnail) AS thumbnail
+ FROM items
+ WHERE owner_id = auth.user_id()
+
+ # On-demand item details (subscription parameter with auth check)
+ item_comments:
+ query: |
+ SELECT * FROM comments
+ WHERE item_id = subscription.parameter('item_id')
+ AND item_id IN (SELECT id FROM items WHERE owner_id = auth.user_id())
+```
+
+See [Examples & Patterns](/sync/streams/examples) for real-world examples like multi-tenant apps and role-based access, and [Supported SQL](/sync/supported-sql) for all available operators and functions.
diff --git a/sync/supported-sql.mdx b/sync/supported-sql.mdx
new file mode 100644
index 00000000..96f5af3a
--- /dev/null
+++ b/sync/supported-sql.mdx
@@ -0,0 +1,355 @@
+---
+title: "Supported SQL"
+description: SQL syntax, operators, and functions supported in Sync Streams and Sync Rules queries.
+---
+
+This page documents the SQL supported in [Sync Streams](/sync/streams/overview) and [Sync Rules (legacy)](/sync/rules/overview).
+
+
+ Some fundamental restrictions on the usage of SQL expressions are:
+
+ 1. They must be deterministic — no random or time-based functions.
+ 2. No external state can be used.
+ 3. They must operate on data available within a single row/document. For example, no aggregation functions are allowed.
+
+ For parameter-specific WHERE restrictions, see [Filtering: WHERE Clause](#filtering-where-clause).
+
+
+## Query Syntax
+
+The supported SQL is a subset of standard SQL syntax. Sync Streams support more SQL features than the legacy Sync Rules.
+
+
+
+ - `SELECT` with column selection and [`WHERE` filtering](#filtering-where-clause)
+ - [Subqueries](/sync/streams/queries#using-subqueries) with `IN (SELECT ...)` and nested subqueries
+ - [`INNER JOIN`](#join-syntax) (selected columns must come from a single table)
+ - [Common Table Expressions (CTEs)](#cte-and-with-syntax) via the `with:` block
+ - Multiple queries per stream via `queries:`
+ - Table-valued functions such as `json_each()` for [expanding arrays](/sync/streams/parameters#expanding-json-arrays)
+ - `BETWEEN` and `CASE` expressions
+ - A limited set of [operators](#operators) and [functions](#functions)
+
+ **Not supported**: aggregation, sorting, or set operations (`GROUP BY`, `ORDER BY`, `LIMIT`, `UNION`, etc.). See [Writing Queries](/sync/streams/queries) for details.
+
+
+ - Simple `SELECT` with column selection
+ - `WHERE` filtering on parameters (see [Filtering: WHERE Clause](#filtering-where-clause))
+ - A limited set of [operators](#operators) and [functions](#functions)
+
+ **Not supported**: subqueries, JOINs, CTEs, aggregation, sorting, or set operations (`GROUP BY`, `ORDER BY`, `LIMIT`, `UNION`, etc.).
+
+
+
+## Filtering: WHERE Clause
+
+Sync queries support a subset of SQL `WHERE` syntax. Allowed operators and combinations differ between Sync Streams and Sync Rules, and are more restrictive than standard SQL.
+
+
+
+
+**`=` and `IS NULL`** — Compare a row column to a static value, a parameter, or another column:
+
+```sql
+-- Static value
+WHERE status = 'active'
+WHERE deleted_at IS NULL
+
+-- Parameter (auth, connection, or subscription)
+WHERE owner_id = auth.user_id()
+WHERE region = connection.parameter('region')
+```
+
+**`AND`** — Fully supported. You can mix parameter comparisons, subqueries, and row-value conditions in the same clause.
+
+```sql
+-- Two parameter conditions
+WHERE owner_id = auth.user_id()
+ AND org_id = auth.parameter('org_id')
+
+-- Parameter condition + row-value condition
+WHERE owner_id = auth.user_id()
+ AND status = 'active'
+
+-- Parameter condition + subquery
+WHERE list_id = subscription.parameter('list_id')
+ AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
+```
+
+**`OR`** — Supported, including `OR` nested inside `AND`. PowerSync rewrites combinations like `A AND (B OR C)` into separate branches before evaluating. Each `OR` branch must be a valid filter on its own; you cannot have a branch that only makes sense when combined with the other.
+
+```sql
+-- Top-level OR
+WHERE owner_id = auth.user_id()
+ OR shared_with = auth.user_id()
+
+-- OR nested inside AND
+WHERE status = 'active'
+ AND (owner_id = auth.user_id() OR shared_with = auth.user_id())
+```
+
+**`NOT`** — Supported for simple conditions on row values. `NOT IN` with a literal set of values is supported: use a JSON array string (e.g. `'["draft", "hidden"]'`), or the `ARRAY['draft', 'hidden']` and `ROW('draft', 'hidden')` forms. You cannot negate a subquery or a parameter array expansion.
+
+```sql
+-- Simple row-value conditions
+WHERE status != 'archived'
+WHERE deleted_at IS NOT NULL
+
+-- NOT IN with JSON array string (any of these forms)
+WHERE category NOT IN '["draft", "hidden"]'
+WHERE category NOT IN ARRAY['draft', 'hidden']
+WHERE category NOT IN ROW('draft', 'hidden')
+
+-- Not supported: negating a subquery
+-- WHERE issue_id NOT IN (SELECT id FROM issues WHERE owner_id = auth.user_id())
+
+-- Not supported: negating a parameter array
+-- WHERE id NOT IN subscription.parameter('excluded_ids')
+```
+
+
+
+
+**`=` and `IS NULL`** — Compare a row column to a static value or a bucket parameter:
+
+```sql
+-- Static value
+WHERE status = 'active'
+WHERE deleted_at IS NULL
+
+-- Bucket parameter
+WHERE owner_id = bucket.user_id
+```
+
+**`AND`** — Supported in both Parameter Queries and Data Queries. In Parameter Queries, each condition may match a different parameter. However, you cannot combine two `IN` expressions on parameters in the same `AND`; split them into separate Parameter Queries instead.
+
+```sql
+-- Supported: parameter condition + row-value condition
+WHERE users.id = request.user_id()
+ AND users.is_admin = true
+
+-- Not supported: two IN expressions on parameters in the same AND
+-- WHERE bucket.list_id IN lists.allowed_ids
+-- AND bucket.org_id IN lists.allowed_org_ids
+```
+
+**`OR`** — Supported when both sides of the `OR` reference the exact same set of parameters. If the two sides use different parameters, use separate parameter queries instead.
+
+```sql
+-- Supported: both sides reference the same parameter
+WHERE lists.owner_id = request.user_id()
+ OR lists.shared_with = request.user_id()
+
+-- Not supported: sides reference different parameters
+-- WHERE lists.owner_id = request.user_id()
+-- OR lists.org_id = bucket.org_id
+```
+
+**`NOT`** — Supported for simple row-value conditions. Not supported on parameter-matching expressions.
+
+```sql
+-- Supported
+WHERE status != 'archived'
+WHERE deleted_at IS NOT NULL
+WHERE NOT users.is_admin = true
+
+-- Not supported in parameter queries
+-- WHERE NOT users.id = request.user_id()
+```
+
+
+
+
+## Operators
+
+Operators can be used in `WHERE` clauses and in `SELECT` expressions. When filtering on parameters (e.g. `auth.user_id()`, `subscription.parameter('id')`), some combinations are restricted — see [Filtering: WHERE Clause](#filtering-where-clause).
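+
+As a brief sketch combining several of these operators (column names are illustrative):
+
+```sql
+SELECT id,
+       first_name || ' ' || last_name AS full_name,
+       quantity * unit_price AS total,
+       metadata ->> 'priority' AS priority
+FROM orders
+WHERE status = 'open' AND archived_at IS NULL
+```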
+
+
+
+ - **Comparison:** `=`, `!=`, `<`, `>`, `<=`, `>=` — If either side is `null`, the result is `null`.
+ - **Null:** `IS NULL`, `IS NOT NULL`
+
+
+ - **Logical:** `AND`, `OR`, `NOT` — See [Filtering: WHERE Clause](#filtering-where-clause) for restrictions when filtering on parameters.
+ - **Mathematical:** `+`, `-`, `*`, `/`
+
+
+ - `||` — Joins two text values together.
+
+
+ - `json -> 'path'` — Returns the value as a JSON string.
+ - `json ->> 'path'` — Returns the extracted value.
+
+
+ - **Sync Streams:** `left IN right` — `left` can be a row column and `right` a parameter array (e.g. `id IN subscription.parameter('ids')`), or `left` a parameter and `right` a row JSON array column. Also supports subqueries: `id IN (SELECT ...)`.
+ - **Sync Rules:** Returns true if `left` is in the `right` JSON array. In Data Queries, `left` must be a row column and `right` cannot be a bucket parameter. In Parameter Queries, either side may be a parameter.
+
+
+ - `x BETWEEN a AND b`, `x NOT BETWEEN a AND b` — True if `x` is in the inclusive range `[a, b]`. Usable in `WHERE` or as a `SELECT` expression. If any operand is `null`, the result is `null`.
+
+ Example: `WHERE price BETWEEN 10 AND 100`
+
+ Supported in Sync Streams only. Not available in Sync Rules.
+
+
+ - `&&` — True if the JSON array in `left` and the set `right` share at least one value. Use it when the row stores an array (e.g. a `tagged_users` column). `left` must be a row column containing a JSON array; `right` must be a subquery or parameter array.
+
+ Example: `WHERE tagged_users && (SELECT id FROM org_members WHERE org_id = auth.parameter('org_id'))`
+
+ Use `IN` when the row has a single value to check against a set; use `&&` when the row has an array and you want to match any element.
+
+ Supported in Sync Streams only. Not available in Sync Rules.
+
+
+
+## Functions
+
+Functions can be used to transform columns/fields before they are synced to a client. They operate on row data or parameters. Type names below (`text`, `integer`, `real`, `blob`, `null`) refer to [SQLite storage classes](https://www.sqlite.org/datatype3.html).
+
+Most functions are from [SQLite built-in functions](https://www.sqlite.org/lang_corefunc.html) and [SQLite JSON functions](https://www.sqlite.org/json1.html).
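+
+For instance, several of these functions can be combined to shape a row before it syncs (a sketch with illustrative column names):
+
+```sql
+SELECT id,
+       upper(country_code) AS country_code,
+       substring(created_at, 1, 10) AS created_date,
+       ifnull(nickname, name) AS display_name,
+       base64(avatar) AS avatar
+FROM users
+```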
+
+
+
+ - **[upper(text)](https://www.sqlite.org/lang_corefunc.html#upper)** — Convert text to upper case.
+ - **[lower(text)](https://www.sqlite.org/lang_corefunc.html#lower)** — Convert text to lower case.
+ - **[substring(text, start, length)](https://www.sqlite.org/lang_corefunc.html#substr)** — Extracts a portion of a string based on specified start index and length. Start index is 1-based. Example: `substring(created_at, 1, 10)` returns the date portion of the timestamp.
+ - **[hex(data)](https://www.sqlite.org/lang_corefunc.html#hex)** — Convert blob or text data to hexadecimal text.
+ - **base64(data)** — Convert blob or text data to base64 text.
+ - **[length(data)](https://www.sqlite.org/lang_corefunc.html#length)** — For text, return the number of characters. For blob, return the number of bytes. For null, return null. For integer and real, convert to text and return the number of characters.
+
+
+ - `CAST(x AS type)` or `x :: type` — Cast to `text`, `numeric`, `integer`, `real`, or `blob`. See [Type mapping](/sync/types) and [SQLite types](https://www.sqlite.org/datatype3.html).
+ - **[typeof(data)](https://www.sqlite.org/lang_corefunc.html#typeof)** — Returns `text`, `integer`, `real`, `blob`, or `null`.
+
+
+ - **[json_each(data)](https://www.sqlite.org/json1.html#jeach)** — Expands a JSON array into rows.
+ - **Sync Streams:** Works with auth and connection parameters (e.g. `JOIN json_each(auth.parameter('ids')) AS t` or `WHERE id IN (SELECT value FROM json_each(auth.parameter('ids')))`). Can also be used with columns from joined tables in some cases (e.g. `SELECT * FROM lists WHERE id IN (SELECT lists.value FROM access_control a, json_each(a.allowed_lists) as lists WHERE a.user = auth.user_id())`). See [Expanding JSON arrays](/sync/streams/parameters#expanding-json-arrays).
+ - **Sync Rules:** Expands a JSON array or object from a request or token parameter into a set of parameter rows. Example: `SELECT value AS project_id FROM json_each(request.jwt() -> 'project_ids')`.
+ - **[json_extract(data, path)](https://www.sqlite.org/json1.html#jex)** — Same as `->>` operator, but the path must start with `$.`
+ - **[json_array_length(data)](https://www.sqlite.org/json1.html#jarraylen)** — Given a JSON array (as text), returns the length of the array. If data is null, returns null. If the value is not a JSON array, returns 0.
+ - **[json_valid(data)](https://www.sqlite.org/json1.html#jvalid)** — Returns 1 if the data can be parsed as JSON, 0 otherwise.
+ - **json_keys(data)** — Returns the set of keys of a JSON object as a JSON array. Example: `SELECT * FROM items WHERE bucket.user_id IN json_keys(permissions_json)`.
+
+
+ - **[ifnull(x, y)](https://www.sqlite.org/lang_corefunc.html#ifnull)** — Returns x if non-null, otherwise returns y.
+
+
+ - **[iif(x, y, z)](https://www.sqlite.org/lang_corefunc.html#iif)** — Returns y if x is true, otherwise returns z.
+
+
+ - **[unixepoch(time-value, [modifier])](https://www.sqlite.org/lang_datefunc.html)** — Returns a time-value as Unix timestamp. If modifier is "subsec", the result is a floating point number, with milliseconds included in the fraction. The time-value argument is required — this function cannot be used to get the current time.
+ - **[datetime(time-value, [modifier])](https://www.sqlite.org/lang_datefunc.html)** — Returns a time-value as a date and time string, in the format YYYY-MM-DD HH:MM:SS. If the specifier is "subsec", milliseconds are also included. If the modifier is "unixepoch", the argument is interpreted as a Unix timestamp. Both modifiers can be included: `datetime(timestamp, 'unixepoch', 'subsec')`. The time-value argument is required — this function cannot be used to get the current time.
+ - **[uuid_blob(id)](https://sqlite.org/src/file/ext/misc/uuid.c)** — Convert a UUID string to bytes.
+
+
+ - **[ST_AsGeoJSON(geometry)](/client-sdks/advanced/gis-data-postgis)** — Convert [PostGIS](/client-sdks/advanced/gis-data-postgis) (in Postgres) geometry from WKB to GeoJSON. Combine with JSON operators to extract specific fields.
+ - **[ST_AsText(geometry)](/client-sdks/advanced/gis-data-postgis)** — Convert [PostGIS](/client-sdks/advanced/gis-data-postgis) (in Postgres) geometry from WKB to Well-Known Text (WKT).
+ - **[ST_X(point)](/client-sdks/advanced/gis-data-postgis)** — Get the X coordinate of a [PostGIS](/client-sdks/advanced/gis-data-postgis) point (in Postgres).
+ - **[ST_Y(point)](/client-sdks/advanced/gis-data-postgis)** — Get the Y coordinate of a [PostGIS](/client-sdks/advanced/gis-data-postgis) point (in Postgres).
+
+
+
+If you need an operator or function not listed, [contact us](/resources/contact-us) so we can consider adding it.
+
+## JOIN Syntax
+
+Supported in Sync Streams only. Not available in Sync Rules.
+
+Sync Streams support a subset of join syntax. The following rules define what is valid:
+
+- **Only inner joins:** Use `JOIN` or `INNER JOIN`. `LEFT`, `RIGHT`, and `OUTER` joins are not supported.
+- **Single output table:** All selected columns must come from one table. Use `table.*` or list columns from that table (e.g. `comments.*`, `comments.id`). Selecting columns from multiple tables is invalid.
+- **Simple join conditions:** Join conditions must be equality comparisons of the form `table1.column = table2.column`. Other comparisons (e.g. `a.x > b.y`) are not supported.
+- **Table-valued functions in JOINs:** `json_each()` and similar functions work with auth or connection parameters (e.g. `json_each(auth.parameter('ids'))`). They can also be used with columns from joined tables in some cases.
+
+```sql
+-- Valid: columns from one table
+SELECT comments.* FROM comments INNER JOIN issues ON comments.issue_id = issues.id
+
+-- Invalid: columns from multiple tables
+SELECT comments.*, issues.title FROM comments JOIN issues ON comments.issue_id = issues.id
+
+-- Invalid: non-equality join condition
+SELECT * FROM a JOIN b ON a.x > b.y
+```
+
+For how to use JOINs in your stream queries (when to use them, patterns, and examples), see [Using Joins](/sync/streams/queries#using-joins).
+
+## CTE and WITH Syntax
+
+Supported in Sync Streams only. Not available in Sync Rules.
+
+Common Table Expressions (CTEs) are defined in a `with:` block inside a stream. Each CTE is a name and a single `SELECT` query. The following rules apply:
+
+- **CTEs cannot reference other CTEs.** Each CTE must be self-contained. To chain logic (e.g. orgs → projects), use nested subqueries in your stream query and reference only the CTE at the leaf level.
+- **CTE names take precedence over table names.** If a CTE has the same name as a database table, the CTE is used. Use distinct names to avoid confusion.
+- **Short-hand `IN cte_name`** works only when the CTE has exactly one column.
+
+```yaml
+# Valid: CTE in a stream
+streams:
+ projects:
+ with:
+ user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+ query: SELECT * FROM projects WHERE org_id IN user_orgs
+
+# Invalid: CTE referencing another CTE
+# streams:
+# my_stream:
+# with:
+# user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()
+# project_ids: SELECT id FROM projects WHERE org_id IN user_orgs # Error
+```
+
+For how to use CTEs, see [Common Table Expressions (CTEs)](/sync/streams/ctes).
+
+## CASE Expressions
+
+Supported in Sync Streams only. Not available in Sync Rules.
+
+`CASE` is allowed anywhere an expression is allowed — in `SELECT` columns or `WHERE` clauses.
+
+**Searched CASE** — Each `WHEN` is an independent boolean condition:
+
+```sql
+CASE
+  WHEN condition_1 THEN result_1
+  WHEN condition_2 THEN result_2
+  ELSE default_result
+END
+```
+
+```sql
+-- Compute a label based on a column value
+SELECT id,
+ CASE
+ WHEN score >= 90 THEN 'A'
+ WHEN score >= 70 THEN 'B'
+ ELSE 'C'
+ END AS grade
+FROM results
+```
+
+**Simple CASE** — Compares one expression against a list of values:
+
+```sql
+CASE expression
+  WHEN value_1 THEN result_1
+  WHEN value_2 THEN result_2
+  ELSE default_result
+END
+```
+
+```sql
+-- Map numeric status codes to readable labels
+SELECT id,
+ CASE status
+ WHEN 1 THEN 'pending'
+ WHEN 2 THEN 'active'
+ WHEN 3 THEN 'closed'
+ ELSE 'unknown'
+ END AS status_label
+FROM tasks
+```
+
+`ELSE` is optional. If omitted and no `WHEN` matches, the result is `null`.
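+
+Since `CASE` is an ordinary expression, it can also appear in a `WHERE` clause (a sketch with illustrative columns and parameters):
+
+```sql
+-- Sync archived rows only for admins; everyone else gets active rows
+SELECT * FROM documents
+WHERE CASE
+  WHEN auth.parameter('role') = 'admin' THEN 1
+  ELSE archived = 0
+END
+```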
diff --git a/sync/types.mdx b/sync/types.mdx
index 8d415a37..d5b8b022 100644
--- a/sync/types.mdx
+++ b/sync/types.mdx
@@ -1,9 +1,11 @@
---
title: "Types"
sidebarTitle: "Type Mapping"
-description: "PowerSync's Sync Rules and Sync Streams use the [SQLite type system](https://www.sqlite.org/datatype3.html)."
+description: "PowerSync's Sync Streams and Sync Rules use the [SQLite type system](https://www.sqlite.org/datatype3.html)."
---
+import BinaryType from '/snippets/binary-type.mdx';
+
The supported client-side SQLite types are:
1. `null`
@@ -19,33 +21,29 @@ Postgres types are mapped to SQLite types as follows:
| Postgres Data Type | PowerSync / SQLite Column Type | Notes |
|--------------------|--------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
-| text, varchar | text | |
-| int2, int4, int8 | integer | |
-| numeric / decimal | text | These types have arbitrary precision in Postgres, so can only be represented accurately as text in SQLite |
-| bool | integer | 1 for true, 0 for false |
-| float4, float8 | real | |
-| enum | text | |
-| uuid | text | |
-| timestamptz | text | Format: `YYYY-MM-DD hh:mm:ss.sssZ`. This is compatible with ISO8601 and SQLite's functions. Precision matches the precision used in Postgres. `-infinity` becomes `0000-01-01 00:00:00Z` and `infinity` becomes `9999-12-31 23:59:59Z`. |
-| timestamp | text | Format: `YYYY-MM-DD hh:mm:ss.sss`. In most cases, timestamptz should be used instead. `-infinity` becomes `0000-01-01 00:00:00` and `infinity` becomes `9999-12-31 23:59:59`. |
-| date, time | text | |
-| json, jsonb | text | There is no dedicated JSON type — JSON functions operate directly on text values. |
-| interval | text | |
-| macaddr | text | |
-| inet | text | |
-| bytea | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/rules/supported-sql). |
-| geometry (PostGIS) | text | hex string of the binary data Use the [ST functions](/sync/rules/supported-sql#functions) to convert to other formats |
-| Arrays | text | JSON array. |
-| `DOMAIN` types | text / depends | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), inner type or raw wire representation (legacy). |
-| Custom types | text | Dependig on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object or raw wire representation (legacy). |
-| (Multi-)ranges | text | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object (array for multi-ranges) or raw wire representation (legacy). |
-
-
-There is no dedicated boolean data type. Boolean values are represented as `1` (true) or `0` (false).
-
-Binary data in Postgres can be accessed in Sync Rules and Sync Streams, but cannot be synced directly to clients: it needs to be converted to hex or Base64 first — see below), and cannot be used as bucket parameters.
-
-`json` and `jsonb` values are treated as `text` values in their serialized representation. JSON functions and operators operate directly on these `text` values.
+| `text`, `varchar` | `text` | |
+| `int2`, `int4`, `int8` | `integer` | |
+| `numeric` / `decimal` | `text` | These types have arbitrary precision in Postgres, so can only be represented accurately as text in SQLite |
+| `bool` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. |
+| `float4`, `float8` | `real` | |
+| `enum` | `text` | |
+| `uuid` | `text` | |
+| `timestamptz` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sssZ`. This is compatible with ISO8601 and SQLite's functions. Precision matches the precision used in Postgres. `-infinity` becomes `0000-01-01 00:00:00Z` and `infinity` becomes `9999-12-31 23:59:59Z`. |
+| `timestamp` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sss`. In most cases, timestamptz should be used instead. `-infinity` becomes `0000-01-01 00:00:00` and `infinity` becomes `9999-12-31 23:59:59`. |
+| `date`, `time` | `text` | |
+| `json`, `jsonb` | `text` | `json` and `jsonb` values are treated as `text` values in their serialized representation. [JSON functions and operators](/sync/supported-sql#operators) operate directly on these `text` values. |
+| `interval` | `text` | |
+| `macaddr` | `text` | |
+| `inet` | `text` | |
+| `bytea` | `blob` | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). |
+| `geometry` (PostGIS) | `text` | Hex string of the binary data. Use the [ST functions](/sync/supported-sql#functions) to convert to other formats |
+| Arrays | `text` | JSON array. |
+| `DOMAIN` types | `text` / depends | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), inner type or raw wire representation (legacy). |
+| Custom types       | `text`                         | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object or raw wire representation (legacy). |
+| (Multi-)ranges | `text` | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object (array for multi-ranges) or raw wire representation (legacy). |
+
+<BinaryType />
+
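+For example, a `bytea` column could be converted inline in a sync query so it syncs as text (illustrative column names):
+
+```sql
+-- Convert binary columns before they reach the client
+SELECT id,
+       base64(attachment) AS attachment_b64,
+       hex(checksum) AS checksum_hex
+FROM files
+```
+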
## MongoDB Type Mapping
@@ -54,32 +52,33 @@ MongoDB types are mapped to SQLite types as follows:
| BSON Type | PowerSync / SQLite Column Type | Notes |
|--------------------|--------------------------------|------------------------------------------------------------------------------------------------------------------------------------------|
-| String | text | |
-| Int, Long | integer | |
-| Double | real | |
-| Decimal128 | text | |
-| Object | text | Converted to a JSON string |
-| Array | text | Converted to a JSON string |
-| ObjectId | text | Lower-case hex string |
-| UUID | text | Lower-case hex string |
-| Boolean | integer | 1 for true, 0 for false |
-| Date | text | Format: `YYYY-MM-DD hh:mm:ss.sssZ` |
-| Null | null | |
-| Binary | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/rules/supported-sql). |
-| Regular Expression | text | JSON text in the format `{"pattern":"...","options":"..."}` |
-| Timestamp | integer | Converted to a 64-bit integer |
-| Undefined | null | |
-| DBPointer | text | JSON text in the format `{"collection":"...","oid":"...","db":"...","fields":...}` |
-| JavaScript | text | JSON text in the format `{"code": "...", "scope": ...}` |
-| Symbol | text | |
-| MinKey, MaxKey | null | |
+| `String` | `text` | |
+| `Int`, `Long` | `integer` | |
+| `Double` | `real` | |
+| `Decimal128` | `text` | |
+| `Object` | `text` | Converted to a JSON string |
+| `Array` | `text` | Converted to a JSON string |
+| `ObjectId` | `text` | Lower-case hex string |
+| `UUID` | `text` | Lower-case hex string |
+| `Boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. |
+| `Date` | `text` | Format: `YYYY-MM-DD hh:mm:ss.sssZ` |
+| `Null` | `null` | |
+| `Binary` | `blob` | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). |
+| Regular Expression | `text` | JSON text in the format `{"pattern":"...","options":"..."}` |
+| `Timestamp` | `integer` | Converted to a 64-bit integer |
+| `Undefined` | `null` | |
+| `DBPointer` | `text` | JSON text in the format `{"collection":"...","oid":"...","db":"...","fields":...}` |
+| `JavaScript` | `text` | JSON text in the format `{"code": "...", "scope": ...}` |
+| `Symbol` | `text` | |
+| `MinKey`, `MaxKey` | `null` | |
* Data is converted to a flat list of columns, one column per top-level field in the MongoDB document.
-* Special BSON types are converted to plain SQLite alternatives.
-* For example, `ObjectId`, `Date`, `UUID` are all converted to a plain `TEXT` column.
-* Nested objects and arrays are converted to JSON arrays, and JSON operators can be used to query them (in the Sync Rules and/or on the client-side).
+* Special BSON types are converted to plain SQLite alternatives. For example, `ObjectId`, `Date`, `UUID` are all converted to a plain `TEXT` column.
+* Nested objects and arrays are converted to JSON, and [JSON functions and operators](/sync/supported-sql#operators) can be used to query them (in the Sync Streams / Sync Rules and/or on the client-side SQLite statements).
* Binary data nested in objects or arrays is not supported.
+<BinaryType />
+
## MySQL (Beta) Type Mapping
@@ -87,31 +86,28 @@ MySQL types are mapped to SQLite types as follows:
| MySQL Data Type | PowerSync / SQLite Column Type | Notes |
|----------------------------------------------------|--------------------------------|-----------------------------------------------------------------------------------|
-| tinyint, smallint, mediumint, bigint, integer, int | integer | |
-| numeric, decimal | text | |
-| bool, boolean | integer | 1 for true, 0 for false |
-| float, double, real | real | |
-| enum | text | |
-| set | text | Converted to JSON array |
-| char, varchar | text | |
-| tinytext, text, mediumtext, longtext | text | |
-| timestamp | text | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
-| date | text | Format: `YYYY-MM-DD` |
-| time, datetime | text | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
-| year | text | |
-| json | text | There is no dedicated JSON type — JSON functions operate directly on text values. |
-| bit | blob | * See note below regarding syncing binary types |
-| binary, varbinary | blob | |
-| image | blob | |
-| geometry, geometrycollection | blob | |
-| point, multipoint | blob | |
-| linestring, multilinestring | blob | |
-| polygon, multipolygon | blob | |
-
-
- Binary data can be accessed in the Sync Rules, but cannot be used as bucket parameters. Before it can be synced directly to clients it needs to be converted to hex or base64 first.
- See [Operators & Functions](/sync/rules/supported-sql)
-
+| `tinyint`, `smallint`, `mediumint`, `bigint`, `integer`, `int` | `integer` | |
+| `numeric`, `decimal` | `text` | |
+| `bool`, `boolean` | `integer` | `1` for true, `0` for false. There is no dedicated boolean data type in SQLite. |
+| `float`, `double`, `real` | `real` | |
+| `enum` | `text` | |
+| `set` | `text` | Converted to JSON array |
+| `char`, `varchar` | `text` | |
+| `tinytext`, `text`, `mediumtext`, `longtext` | `text` | |
+| `timestamp` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
+| `date` | `text` | Format: `YYYY-MM-DD` |
+| `time`, `datetime` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
+| `year` | `text` | |
+| `json` | `text` | There is no dedicated JSON type in SQLite — JSON functions operate directly on text values. |
+| `bit` | `blob` | * See note below regarding syncing binary types |
+| `binary`, `varbinary` | `blob` | |
+| `image` | `blob` | |
+| `geometry`, `geometrycollection` | `blob` | |
+| `point`, `multipoint` | `blob` | |
+| `linestring`, `multilinestring` | `blob` | |
+| `polygon`, `multipolygon` | `blob` | |
+<BinaryType />
+
## SQL Server (Alpha) Type Mapping
@@ -120,25 +116,23 @@ SQL Server types are mapped to SQLite types as follows:
| SQL Server Data Type | PowerSync / SQLite Column Type | Notes |
|----------------------------------------------------|--------------------------------|--------------------------------------------------------|
-| tinyint, smallint, int, bigint | integer | |
-| numeric, decimal | text | Numeric string |
-| float, real | real | |
-| bit | integer | |
-| money, smallmoney | text | Numeric string |
-| xml | text | |
-| char, nchar, ntext | text | |
-| varchar, nvarchar, text | text | |
-| uniqueidentifier | text | |
-| timestamp | text | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
-| date | text | Format: `YYYY-MM-DD` |
-| time | text | Format: `HH:mm:ss.sss` |
-| datetime, datetime2, smalldatetime, datetimeoffset | text | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
-| json | text | Only exists for Azure SQL Database and SQL Server 2025 |
-| geometry, geography | text | Text of JSON object describing the spatial data type |
-| binary, varbinary, image | blob | * See note below regarding binary types |
-| rowversion, timestamp | blob | * See note below regarding binary types |
-| User Defined Types: hiearchyid | blob | * See note below regarding binary types |
-
-
- Binary data can be accessed in the Sync Rules, but cannot be used as bucket parameters. Before it can be synced directly to clients it needs to be converted to hex or Base64 first. See [Operators & Functions](/sync/rules/supported-sql)
-
+| `tinyint`, `smallint`, `int`, `bigint` | `integer` | |
+| `numeric`, `decimal` | `text` | Numeric string |
+| `float`, `real` | `real` | |
+| `bit` | `integer` | |
+| `money`, `smallmoney` | `text` | Numeric string |
+| `xml` | `text` | |
+| `char`, `nchar`, `ntext` | `text` | |
+| `varchar`, `nvarchar`, `text` | `text` | |
+| `uniqueidentifier` | `text` | |
+| `timestamp` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
+| `date` | `text` | Format: `YYYY-MM-DD` |
+| `time` | `text` | Format: `HH:mm:ss.sss` |
+| `datetime`, `datetime2`, `smalldatetime`, `datetimeoffset` | `text` | ISO 8601 format: `YYYY-MM-DDTHH:mm:ss.sssZ` |
+| `json` | `text` | Only exists for Azure SQL Database and SQL Server 2025 |
+| `geometry`, `geography` | `text` | Text of a JSON object describing the spatial data |
+| `binary`, `varbinary`, `image` | `blob` | * See note below regarding binary types |
+| `rowversion`, `timestamp` | `blob` | * See note below regarding binary types |
+| User Defined Types: `hierarchyid` | `blob` | * See note below regarding binary types |
+
+ Binary data can be accessed in Sync Rules, but cannot be used as bucket parameters. Before it can be synced directly to clients, it must first be converted to hex or Base64. See [Operators & Functions](/sync/rules/supported-sql).
+
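Since `decimal` and `money` values arrive as numeric strings, ordering or arithmetic on them should go through a cast; otherwise SQLite compares them lexicographically. A minimal sketch (the `orders` table and its columns are hypothetical):

```python
import sqlite3

# Hypothetical `orders` table: a SQL Server `decimal` column arrives as a
# numeric string in SQLite, per the mapping table above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, total TEXT)")
db.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("o1", "19.99"), ("o2", "100.00"), ("o3", "5.00")],
)

# Text comparison would rank "5.00" above "100.00"; casting makes the
# ordering numeric:
rows = db.execute(
    "SELECT id FROM orders ORDER BY CAST(total AS REAL) DESC"
).fetchall()
print([r[0] for r in rows])  # ['o2', 'o1', 'o3']
```

The same cast applies to `money` and `numeric` columns before summing or averaging them in client-side queries.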
diff --git a/tools/cli.mdx b/tools/cli.mdx
index c4c83982..391c275e 100644
--- a/tools/cli.mdx
+++ b/tools/cli.mdx
@@ -47,7 +47,7 @@ npm
### Deploying Sync Rules with GitHub Actions
-You can automate sync rule deployments using the PowerSync CLI in your CI/CD pipeline. This is useful for ensuring your sync rules are automatically deployed whenever changes are pushed to a repository.
+You can automate Sync Rule deployments using the PowerSync CLI in your CI/CD pipeline. This is useful for ensuring your Sync Rules are automatically deployed whenever changes are pushed to a repository.
-See a complete example of deploying sync rules with GitHub Actions
+See a complete example of deploying Sync Rules with GitHub Actions
The example repository demonstrates how to:
-* Set up a GitHub Actions workflow to deploy sync rules on push to the `main` branch
+* Set up a GitHub Actions workflow to deploy Sync Rules on push to the `main` branch
* Configure required repository secrets (`POWERSYNC_AUTH_TOKEN`, `POWERSYNC_INSTANCE_ID`, `POWERSYNC_PROJECT_ID`, `POWERSYNC_ORG_ID`)
* Automatically deploy `sync-rules.yaml` changes
diff --git a/tools/local-development.mdx b/tools/local-development.mdx
index 71baf68b..d55b2663 100644
--- a/tools/local-development.mdx
+++ b/tools/local-development.mdx
@@ -103,13 +103,15 @@ storage:
# The port which the PowerSync API server will listen on
port: 8080
-# Specify sync rules
-sync_rules:
- # TODO use specific sync rules here
+# Specify sync config (Sync Streams recommended for new projects)
+sync_config:
content: |
- bucket_definitions:
- global:
- data:
+ config:
+ edition: 3
+ streams:
+ shared_data:
+ auto_subscribe: true
+ queries:
- SELECT * FROM lists
- SELECT * FROM todos
diff --git a/tools/powersync-dashboard.mdx b/tools/powersync-dashboard.mdx
index 1d8c4c0a..37d09b47 100644
--- a/tools/powersync-dashboard.mdx
+++ b/tools/powersync-dashboard.mdx
@@ -69,9 +69,9 @@ When you navigate to a specific instance, you'll see a left sidebar with various
- **Health** - Overview of its connection health, deploy history, replication status, and recently connected clients
- **Database Connections** - Configure and manage the source database connection
- **Client Auth** - Configure authentication settings
-- **Sync Rules** - Edit, validate, and deploy your Sync Rules
-- **Sync Test** - Test your Sync Rules configuration
-- **Client SDK Setup** - Generate the [client-side schema](/intro/setup-guide#define-your-client-side-schema) based off your [Sync Rules](/sync/rules/overview)
+- **Sync Streams / Sync Rules** - Edit, validate, and deploy your sync config
+- **Sync Test** - Test your Sync Streams (or legacy Sync Rules)
+- **Client SDK Setup** - Generate the [client-side schema](/intro/setup-guide#define-your-client-side-schema) based on your deployed [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview)
- **Write API** - Resources for exposing the write API endpoint
- **Logs** - View replication and service logs
- **Metrics** - Monitor usage metrics and performance
@@ -87,11 +87,11 @@ In the top bar, you'll see a "Connect" button that provides quick access to your
Here are some of the most common tasks you'll perform in the dashboard:
-- **Edit and deploy Sync Rules** - Select your project and instance and go to the **Sync Rules** view to edit your Sync Rules, then click **"Validate"** and **"Deploy"** to deploy them
+- **Edit and deploy Sync Streams / Sync Rules** - Select your project and instance and go to the **Sync Streams** (or legacy **Sync Rules**) view to edit your sync config, then click **"Validate"** and **"Deploy"** to deploy
- **Generate development token** - Navigate to the **Client Auth** view and ensure the **Development tokens** setting is checked. Click the **Connect** button in the top bar and follow the instructions to generate a [development token](/configuration/auth/development-tokens).
- **Launch the Sync Diagnostics Client** - Navigate to the **Sync Test** view, generate a development token, and click **"Launch"** to open the [Sync Diagnostics Client](/tools/diagnostics-client).
- **Copy your instance URL** - Click **Connect** in the top bar and copy the instance URL from the dialog.
-- **Generate client-side schema** - Click the **Connect** button in the top bar to generate the [client-side schema](/intro/setup-guide#define-your-client-side-schema) based on your [Sync Rules](/sync/rules/overview) in your preferred language
+- **Generate client-side schema** - Click the **Connect** button in the top bar to generate the [client-side schema](/intro/setup-guide#define-your-client-side-schema) based on your deployed [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) in your preferred language
- **Monitor instance health** - Navigate to the **Health** view to see an overview of your instance status, database connections, and recent deploys
- **View logs** - Navigate to the **Logs** view to review replication and client sync logs
- **Monitor metrics** - Navigate to the **Metrics** view to track usage metrics