Merged
@@ -1049,6 +1049,10 @@ export const database: NavMenuConstant = {
name: 'Partitioning your tables',
url: '/guides/database/partitions' as `/${string}`,
},
{
name: 'Migrating to pg_partman',
url: '/guides/database/migrating-to-pg-partman' as `/${string}`,
},
{
name: 'Managing connections',
url: '/guides/database/connection-management' as `/${string}`,
@@ -1249,6 +1253,10 @@ export const database: NavMenuConstant = {
name: 'pg_net: Async Networking',
url: '/guides/database/extensions/pg_net' as `/${string}`,
},
{
name: 'pg_partman: Partition management',
url: '/guides/database/extensions/pg_partman' as `/${string}`,
},
{
name: 'pg_plan_filter: Restrict Total Cost',
url: '/guides/database/extensions/pg_plan_filter' as `/${string}`,
95 changes: 95 additions & 0 deletions apps/docs/content/guides/database/extensions/pg_partman.mdx
@@ -0,0 +1,95 @@
---
id: 'pg_partman'
title: 'pg_partman: partition management'
description: 'Automated partition management'
---

[`pg_partman`](https://github.com/pgpartman/pg_partman) is a Postgres extension that automates the creation and maintenance of partitions for tables using Postgres native partitioning.

## Enable the extension

To enable `pg_partman`, create a dedicated schema for it and enable the extension there.

{/* prettier-ignore */}
```sql
create schema if not exists partman;
create extension if not exists pg_partman with schema partman;
```
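To confirm the extension is installed where you expect, you can query the Postgres catalogs (a standard catalog check, not a `pg_partman` API):

{/* prettier-ignore */}
```sql
-- Verify pg_partman is installed and which schema owns it
select extname, extnamespace::regnamespace as schema, extversion
from pg_extension
where extname = 'pg_partman';
```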

## Create a partitioned table

`pg_partman` requires your parent table to already be declared as a partitioned table.

{/* prettier-ignore */}
```sql
create table public.messages (
id bigint generated by default as identity,
sent_at timestamptz not null,
sender_id uuid,
recipient_id uuid,
body text,
primary key (sent_at, id)
)
partition by range (sent_at);
```

## Set up partitioning

You configure the parent table using `partman.create_parent()`. The function takes an `ACCESS EXCLUSIVE` lock briefly while it creates the initial partitions.

### Time-based partitions

{/* prettier-ignore */}
```sql
select partman.create_parent(
p_parent_table := 'public.messages',
p_control := 'sent_at',
p_type := 'range',
p_interval := '7 days',
p_premake := 7,
p_start_partition := '2025-01-01 00:00:00'
);
```
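To inspect the partitions that were just created, `pg_partman` provides a `show_partitions()` helper. The single-argument call below assumes the defaults in recent `pg_partman` versions:

{/* prettier-ignore */}
```sql
-- List the child partitions now managed for the parent table
select * from partman.show_partitions('public.messages');
```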

### Integer-based partitions

{/* prettier-ignore */}
```sql
create table public.events (
id bigint generated by default as identity,
inserted_at timestamptz not null default now(),
payload jsonb,
primary key (id)
)
partition by range (id);

select partman.create_parent(
p_parent_table := 'public.events',
p_control := 'id',
p_type := 'range',
p_interval := '100000'
);
```

## Running maintenance

It’s important to call `pg_partman` maintenance regularly so future partitions are pre-created and retention policies are applied.

{/* prettier-ignore */}
```sql
call partman.run_maintenance_proc();
```

To automate this, schedule it using `pg_cron`.

{/* prettier-ignore */}
```sql
create extension if not exists pg_cron;

select
cron.schedule('@hourly', $$call partman.run_maintenance_proc()$$);
```
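Retention is configured per parent table in the `partman.part_config` table. As a sketch (column names as in `pg_partman` 5.x), dropping partitions holding data older than 90 days could look like:

{/* prettier-ignore */}
```sql
-- Drop partitions with data older than 90 days.
-- With retention_keep_table = false the old tables are dropped,
-- not just detached. Applied on the next maintenance run.
update partman.part_config
set
  retention = '90 days',
  retention_keep_table = false
where parent_table = 'public.messages';
```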

## Resources

- Official [pg_partman documentation](https://github.com/pgpartman/pg_partman/blob/development/doc/pg_partman.md)
4 changes: 4 additions & 0 deletions apps/docs/content/guides/database/extensions/timescaledb.mdx
@@ -8,6 +8,10 @@ description: 'Scalable time-series data storage and analysis'

The `timescaledb` extension is deprecated in projects using Postgres 17. It continues to be supported in projects using Postgres 15, but will need to be dropped before those projects are upgraded to Postgres 17. See the [Upgrading to Postgres 17 notes](/docs/guides/platform/upgrading#upgrading-to-postgres-17) for more information.

If you are using hypertables, follow the [migration guide](/docs/guides/database/migrating-to-pg-partman) to convert to native partitioning managed by `pg_partman`.

For additional support, contact our Success team by creating a support ticket in the Supabase Dashboard.

</Admonition>

[`timescaledb`](https://docs.timescale.com/timescaledb/latest/) is a Postgres extension designed for improved handling of time-series data. It provides a scalable, high-performance solution for storing and querying time-series data on top of a standard Postgres database.
113 changes: 113 additions & 0 deletions apps/docs/content/guides/database/migrating-to-pg-partman.mdx
@@ -0,0 +1,113 @@
---
id: 'migrating-from-timescaledb-to-pg-partman'
title: 'Migrate from TimescaleDB to pg_partman'
description: 'Convert TimescaleDB hypertables to Postgres native partitions managed by pg_partman.'
---

Starting from Postgres 17, Supabase projects do not have the `timescaledb` extension available. If your project relies on TimescaleDB hypertables, you will need to migrate to standard Postgres tables before upgrading.

This guide shows one approach to migrate a hypertable to a native Postgres partitioned table and optionally configure `pg_partman` to automate ongoing partition maintenance.
The approach outlined in this guide can also be used for traditional partitioned tables.

## Before you begin

- Test the migration path in a staging environment (for example by creating a copy of your production project or using branching).
- Review your application for TimescaleDB-specific SQL usage (for example `time_bucket()`, compression policies). Those features are not provided by `pg_partman`.
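For `time_bucket()` specifically, Postgres 14+ ships a native `date_bin()` function that covers many of the same grouping queries. The table and column names below are illustrative:

{/* prettier-ignore */}
```sql
-- Group rows into 15-minute buckets, similar to
-- time_bucket('15 minutes', sent_at)
select
  date_bin('15 minutes', sent_at, timestamptz '2025-01-01') as bucket,
  count(*)
from public.messages
group by bucket
order by bucket;
```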

## Migration overview

1. Create a new partitioned table.
2. Copy data from the hypertable to the new table.
3. Swap over and drop the hypertable.
4. Configure `pg_partman` (optional) and schedule maintenance.

## Example: Migrate `messages` from hypertable to native partitions

This example assumes a `messages` hypertable partitioned by `sent_at`.

### 1. Rename the existing hypertable

This keeps the original data in place while you create a new partitioned table with the original name.

{/* prettier-ignore */}
```sql
alter table public.messages rename to ht_messages;
```

### 2. Create a new partitioned table

When using native partitioning, the partitioning column must be included in any unique index (including the primary key).

{/* prettier-ignore */}
```sql
create table public.messages (
like public.ht_messages including all,
primary key (sent_at, id)
)
partition by range (sent_at);
```

### 3. Copy data into the new table

For large tables, consider copying in batches (for example by time range) during a maintenance window.

{/* prettier-ignore */}
```sql
insert into public.messages
select *
from public.ht_messages;
```
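A batched copy might look like the following sketch, copying one week per statement to keep transactions small (the date bounds are placeholders to adapt to your data):

{/* prettier-ignore */}
```sql
-- Repeat for each week in your data range,
-- or wrap in a loop inside a DO block.
insert into public.messages
select *
from public.ht_messages
where sent_at >= '2025-01-01' and sent_at < '2025-01-08';
```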

### 4. Drop the old hypertable (and TimescaleDB)

Only drop the extension once you’ve migrated all hypertables and no other objects depend on it.

{/* prettier-ignore */}
```sql
drop table public.ht_messages;

drop extension if exists timescaledb;
```

### 5. Configure `pg_partman` (optional)

Enable `pg_partman` and register your table so partitions are created ahead of time.

{/* prettier-ignore */}
```sql
create schema if not exists partman;
create extension if not exists pg_partman with schema partman;

select partman.create_parent(
p_parent_table := 'public.messages',
p_control := 'sent_at',
p_type := 'range',
p_interval := '7 days',
p_premake := 7,
p_start_partition := '2025-01-01 00:00:00'
);
```

## Keep partitions up to date

`pg_partman` relies on regularly run maintenance to pre-create partitions and apply retention policies.

{/* prettier-ignore */}
```sql
call partman.run_maintenance_proc();
```

To automate this, schedule it with `pg_cron`.

{/* prettier-ignore */}
```sql
create extension if not exists pg_cron;

select cron.schedule('@daily', $$call partman.run_maintenance_proc()$$);
```
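You can confirm the job was registered by querying `pg_cron`'s `cron.job` table:

{/* prettier-ignore */}
```sql
-- Inspect scheduled jobs
select jobid, schedule, command
from cron.job;
```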

## Additional resources

- [Partitioning your tables](/docs/guides/database/partitions)
- [`pg_partman` documentation](/docs/guides/database/extensions/pg_partman)
- [`pg_partman` migration guides](https://github.com/pgpartman/pg_partman/blob/development/doc/migrate_to_partman.md)
54 changes: 27 additions & 27 deletions apps/docs/content/guides/realtime/broadcast.mdx
@@ -10,22 +10,22 @@ You can use Realtime Broadcast to send low-latency messages between users. Messa

The way Broadcast works changes based on the channel you are using:

- From REST API will receive an HTTP request which then will be sent via WebSocket to connected clients
- From Client libraries we have an established WebSocket connection and we use that to send a message to the server which then will be sent via WebSocket to connected clients
- From Database we add a new entry to `realtime.messages` where we have logical replication set to listen for changes which then will be sent via WebSocket to connected clients
- **REST API**: Receives an HTTP request and then sends a message via WebSocket to connected clients
- **Client libraries**: Sends a message via WebSocket to the server, and then the server sends a message via WebSocket to connected clients
- **Database**: Adds a new entry to `realtime.messages`, where logical replication listens for changes, and then sends a message via WebSocket to connected clients

<Admonition type="note">

The public flag (the last argument in `realtime.send(payload, event, topic, is_private))` only affects who can subscribe to the topic not who can read messages from the database.
The public flag (the last argument in `realtime.send(payload, event, topic, is_private)`) only affects who can subscribe to the topic, not who can read messages from the database.

- Public (false) → Anyone can subscribe to that topic without authentication
- Private (true) → Only authenticated clients can subscribe to that topic
- Public (`false`) → Anyone can subscribe to that topic without authentication
- Private (`true`) → Only authenticated clients can subscribe to that topic

However, regardless of whether it's public or private, the Realtime service connects to your database as the authenticated Supabase Admin role.
Regardless of whether it's public or private, the Realtime service connects to your database as the authenticated Supabase Admin role.

</Admonition>

For Authorization we insert a message and try to read it, and rollback the transaction to verify that the RLS policies set by the user are being respected by the user joining the channel, but this message isn't sent to the user. You can read more about it in the [Authorization docs](/docs/guides/realtime/authorization).
For Authorization, we insert a message and try to read it, and rollback the transaction to verify that the Row Level Security (RLS) policies set by the user are being respected by the user joining the channel, but this message isn't sent to the user. You can read more about it in [Authorization](/docs/guides/realtime/authorization).

## Subscribe to messages

@@ -132,9 +132,9 @@ In most cases, you can get the correct key from [the Project's **Connect** dialo
</$Show>
</Tabs>

### Receiving Broadcast messages
### Receive Broadcast messages

You can provide a callback for the `broadcast` channel to receive messages. This example will receive any `broadcast` messages that are sent to `test-channel`:
You can receive Broadcast messages by providing a callback to the channel.

<Tabs
scrollable
@@ -448,13 +448,13 @@ select

<Admonition type="note">

The realtime.send function in the database includes a flag that determines whether the broadcast is private or public, and client channels also have the same configuration. For broadcasts to work correctly, these settings must match a public broadcast will only reach public channels, and a private broadcast will only reach private ones.
The `realtime.send()` function in the database includes a flag that determines whether the broadcast is private or public, and client channels also have the same configuration. For broadcasts to work correctly, these settings must match. A public broadcast only reaches public channels and a private broadcast only reaches private channels.

By default, all database broadcasts are private, meaning clients must authenticate to receive them. If the database sends a public message but the client subscribes to a private channel, the message won't be delivered since private channels only accept signed, authenticated messages.
By default, all database broadcasts are private, meaning clients must authenticate to receive them. If the database sends a public message but the client subscribes to a private channel, the message is not delivered because private channels only accept signed, authenticated messages.

</Admonition>

It's a common use case to broadcast messages when a record is created, updated, or deleted. We provide a helper function specific to this use case, `realtime.broadcast_changes()`. For more details, check out the [Subscribing to Database Changes](/docs/guides/realtime/subscribing-to-database-changes) guide.
You can use the `realtime.broadcast_changes()` helper function to broadcast messages when a record is created, updated, or deleted. For more details, read [Subscribing to Database Changes](/docs/guides/realtime/subscribing-to-database-changes).

### Broadcast using the REST API

@@ -685,7 +685,7 @@ You can pass configuration options while initializing the Supabase Client.
>
<TabPanel id="js" label="JavaScript">

You can confirm that the Realtime servers have received your message by setting Broadcast's `ack` config to `true`.
You can confirm that the Realtime servers have received your message by setting Broadcast's `ack` setting to `true`.

{/* prettier-ignore */}
```js
@@ -881,20 +881,20 @@ You can also send a Broadcast message by making an HTTP request to Realtime serv

### How it works

Broadcast Changes allows you to trigger messages from your database. To achieve it Realtime is directly reading your WAL (Write Append Log) file using a publication against the `realtime.messages` table so whenever a new insert happens a message is sent to connected users.
Broadcast Changes allows you to trigger messages from your database. To achieve it, Realtime directly reads your Write-Ahead Log (WAL) file using a publication against the `realtime.messages` table. Whenever a new insert occurs, a message is sent to connected users.

It uses partitioned tables per day which allows the deletion your previous messages in a performant way by dropping the physical tables of this partitioned table. Tables older than 3 days old are deleted.
It uses partitioned tables per day, which allows performant deletion of previous messages by dropping each day's physical child table. Tables older than 3 days are deleted.

Broadcasting from the database works like a client-side broadcast, using WebSockets to send JSON packages. [Realtime Authorization](/docs/guides/realtime/authorization) is required and enabled by default to protect your data.
Broadcasting from the database works like a client-side broadcast, using WebSockets to send JSON payloads. [Realtime Authorization](/docs/guides/realtime/authorization) is required and enabled by default to protect your data.

The database broadcast feature provides two functions to help you send messages:
Broadcast Changes provides two functions to help you send messages:

- `realtime.send` will insert a message into realtime.messages without a specific format.
- `realtime.broadcast_changes` will insert a message with the required fields to emit database changes to clients. This helps you set up triggers on your tables to emit changes.
- `realtime.send()` inserts a message into `realtime.messages` without a specific format.
- `realtime.broadcast_changes()` inserts a message with the required fields to emit database changes to clients. This helps you set up triggers on your tables to emit changes.

### Broadcasting a message from your database

The `realtime.send` function provides the most flexibility by allowing you to broadcast messages from your database without a specific format. This allows you to use database broadcast for messages that aren't necessarily tied to the shape of a Postgres row change.
The `realtime.send()` function provides the most flexibility by allowing you to broadcast messages from your database without a specific format. This allows you to use database broadcast for messages that aren't necessarily tied to the shape of a Postgres row change.

```sql
SELECT realtime.send (
@@ -909,7 +909,7 @@ SELECT realtime.send (

#### Setup realtime authorization

Realtime Authorization is required and enabled by default. To allow your users to listen to messages from topics, create an RLS (Row Level Security) policy:
Realtime Authorization is required and enabled by default. To allow your users to listen to messages from topics, create an RLS policy:

```sql
CREATE POLICY "authenticated can receive broadcasts"
@@ -920,13 +920,13 @@ USING ( true );

```

See the [Realtime Authorization](/docs/guides/realtime/authorization) docs to learn how to set up more specific policies.
Read [Realtime Authorization](/docs/guides/realtime/authorization) to learn how to set up more specific policies.

#### Set up trigger function

First, set up a trigger function that uses `realtime.broadcast_changes` to insert an event whenever it is triggered. The event is set up to include data on the schema, table, operation, and field changes that triggered it.
First, set up a trigger function that uses the `realtime.broadcast_changes()` function to insert an event whenever it is triggered. The event is set up to include data on the schema, table, operation, and field changes that triggered it.

For this example use case, we want to have a topic with the name `topic:<record id>` to which we're going to broadcast events.
For this example, you're going to broadcast events to a topic named `topic:<record_id>`.

```sql
CREATE OR REPLACE FUNCTION public.your_table_changes()
@@ -948,7 +948,7 @@ END;
$$ LANGUAGE plpgsql;
```

Of note are the Postgres native trigger special variables used:
The Postgres native trigger special variables used are:

- `TG_OP` - the operation that triggered the function
- `TG_TABLE_NAME` - the table that caused the trigger
@@ -1000,7 +1000,7 @@ Broadcast Replay enables **private** channels to access messages that were sent

You can configure replay with the following options:

- **`since`** (Required): The epoch timestamp in milliseconds (e.g., `1697472000000`), specifying the earliest point from which messages should be retrieved.
- **`since`** (Required): The epoch timestamp in milliseconds (for example, `1697472000000`), specifying the earliest point from which messages should be retrieved.
- **`limit`** (Optional): The number of messages to return. This must be a positive integer, with a maximum value of 25.

<Tabs