- )
-}
-
-export default GuidesTableOfContents
-export type { TOCHeader }
diff --git a/apps/docs/content/guides/api/rest/auto-generated-docs.mdx b/apps/docs/content/guides/api/rest/auto-generated-docs.mdx
index 7357392b58810..dce84addd9cf7 100644
--- a/apps/docs/content/guides/api/rest/auto-generated-docs.mdx
+++ b/apps/docs/content/guides/api/rest/auto-generated-docs.mdx
@@ -6,13 +6,15 @@ description: 'Supabase provides documentation that updates automatically.'
Supabase generates documentation in the [Dashboard](/dashboard) which updates as you make database changes.
-1. Go to the [API](/dashboard/project/_/api) page in the Dashboard.
-2. Select any table under **Tables and Views** in the sidebar.
-3. Switch between the JavaScript and the cURL docs using the tabs.
+1. Go to the [Project Settings](/dashboard/project/_/settings/general) page in the Dashboard.
+2. Select **Data API** > **Docs**.
+3. Select any table under **Tables and Views** in the sidebar.
+4. Switch between the JavaScript and the cURL docs using the tabs.
+5. You can also select which `SUPABASE_KEY` value to use in the generated snippets.
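For reference, the generated cURL snippets follow the standard PostgREST URL shape. A minimal sketch (the project ref, key, and `todos` table here are placeholders, not values from your project):

```shell
# Placeholder values -- substitute your own project ref and API key.
PROJECT_REF="your-project-ref"
SUPABASE_KEY="your-api-key"

# Builds the REST endpoint URL for a given table, the way the generated
# cURL docs do.
build_rest_url() {
  echo "https://${PROJECT_REF}.supabase.co/rest/v1/${1}?select=*"
}

# A generated cURL snippet then looks roughly like:
#   curl "$(build_rest_url todos)" \
#     -H "apikey: ${SUPABASE_KEY}" \
#     -H "Authorization: Bearer ${SUPABASE_KEY}"
build_rest_url todos   # https://your-project-ref.supabase.co/rest/v1/todos?select=*
```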
diff --git a/apps/docs/content/guides/platform/backups.mdx b/apps/docs/content/guides/platform/backups.mdx
index 55b43ec5041c0..a08c9585ebfc9 100644
--- a/apps/docs/content/guides/platform/backups.mdx
+++ b/apps/docs/content/guides/platform/backups.mdx
@@ -1,75 +1,39 @@
---
title: 'Database Backups'
-description: 'Learn about the available backup methods for your Supabase project.'
+description: 'Learn about backups for your Supabase project.'
---
-Database backups are an integral part of any disaster recovery plan. Disasters come in many shapes and sizes. It could be as simple as accidentally deleting a table column, the database crashing, or even a natural calamity wiping out the underlying hardware a database is running on. The risks and impact brought by these scenarios can never be fully eliminated, but only minimized or even mitigated. Having database backups is a form of insurance policy. They are essentially snapshots of the database at various points in time. When disaster strikes, database backups allow the project to be brought back to any of these points in time, therefore averting the crisis.
+We automatically back up all Free, Pro, Team, and Enterprise Plan projects on a daily basis. You can find backups in the [**Database** > **Backups**](/dashboard/project/_/database/backups/scheduled) section of the Dashboard.
-
-
-The Supabase team regularly monitors the status of backups. In case of any issues, you can [contact support](/dashboard/support/new). Also you can check out our [status page](https://status.supabase.com/) at any time.
-
-
+Pro Plan projects can access the last 7 days of daily backups. Team Plan projects can access the last 14 days of daily backups, while Enterprise Plan projects can access up to 30 days of daily backups. If you need more frequent backups, consider enabling [Point-in-Time Recovery](#point-in-time-recovery). We recommend that Free Plan projects regularly export their data using the [Supabase CLI `db dump` command](/docs/reference/cli/supabase-db-dump) and maintain off-site backups.
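A minimal sketch of such an export, assuming the Supabase CLI is installed and `SUPABASE_DB_URL` holds your connection string (both are assumptions, not values from this guide):

```shell
# Date-stamped filename for the export, e.g. backup-2024-05-01.sql
backup_file() {
  echo "backup-$(date +%F).sql"
}

# Dump the remote database to the file. The guard keeps the sketch
# runnable on machines without the CLI; drop it (and the `|| true`)
# in a real backup script so failures are visible.
if command -v supabase >/dev/null; then
  supabase db dump --db-url "$SUPABASE_DB_URL" -f "$(backup_file)" || true
fi

# Copy the dump off-site afterwards, e.g. (hypothetical bucket name):
#   aws s3 cp "$(backup_file)" s3://my-offsite-backups/
```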
-
+
-Once a project is deleted all associated data will be permanently removed, including any backups stored in S3. This action is irreversible and should be carefully considered before proceeding.
+When you delete a project, we permanently remove all associated data, including any backups stored in S3. This action is irreversible, so consider it carefully before proceeding.
-## Types of backups
-
-Database backups can be categorized into two types: **logical** and **physical**. You can learn more about them [here](/blog/postgresql-physical-logical-backups).
-
-
-
-To enable physical backups, you have three options:
+
-- Enable [Point-in-Time Recovery (PITR)](#point-in-time-recovery)
-- [Increase your database size](/docs/guides/platform/database-size) to greater than 15GB
-- [Create a read replica](/docs/guides/platform/read-replicas)
-
-Once a project satisfies at least one of the requirements for physical backups then logical backups are no longer made. However, your project may revert back to logical backups if you remove add-ons.
+For security purposes, daily backups do not store passwords for custom roles, and you will not find them in downloadable files. If you restore from a daily backup and use custom roles, you will need to reset their passwords after the restoration completes.
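Resetting such a role's password after the restore comes down to an `ALTER ROLE` statement run through `psql`. A sketch (the role name and helper are hypothetical; in real use, avoid inlining passwords in shell history):

```shell
# Builds the ALTER ROLE statement for a given role and new password.
reset_role_sql() {
  printf "ALTER ROLE %s WITH PASSWORD '%s';" "$1" "$2"
}

# Run it against the restored database, e.g.:
#   psql "$DATABASE_URL" -c "$(reset_role_sql my_custom_role "$NEW_PASSWORD")"
reset_role_sql my_custom_role 'new-password'
# ALTER ROLE my_custom_role WITH PASSWORD 'new-password';
```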
-You can confirm your project's backup type by navigating to [**Database Backups > Scheduled backups**](/dashboard/project/_/database/backups/scheduled) and if you can download a backup then it is logical, otherwise it is physical.
-
-However, if your project has the Point-in-Time Recovery (PITR) add-on then the backups are physical and you can view them in [Database Backups > Point in time](/dashboard/project/_/database/backups/pitr).
-
-## Frequency of backups
-
-When deciding how often a database should be backed up, the key business metric Recovery Point Objective (RPO) should be considered. RPO is the threshold for how much data, measured in time, a business could lose when disaster strikes. This amount is fully dependent on a business and its underlying requirements. A low RPO would mean that database backups would have to be taken at an increased cadence throughout the day. Each Supabase project has access to two forms of backups, Daily Backups and Point-in-Time Recovery (PITR). The agreed upon RPO would be a deciding factor in choosing which solution best fits a project.
-
-If you enable PITR, Daily Backups will no longer be taken. PITR provides a finer granularity than Daily Backups, so it's unnecessary to run both.
+Database backups do not include objects you store via the Storage API, as the database only includes metadata about these objects. Restoring an old backup does not restore objects you deleted after that backup.
-
+## Backup and restore process
-Database backups do not include objects stored via the Storage API, as the database only includes metadata about these objects. Restoring an old backup does not restore objects that have been deleted since then.
+You can access daily backups in the [**Database** > **Backups**](/dashboard/project/_/database/backups/scheduled) section of the Dashboard and restore your project to any of them.
-
+To generate a logical backup yourself, use the [Supabase CLI `db dump` command](/docs/reference/cli/supabase-db-dump).
-## Daily backups
+## Managing backups programmatically
-All Pro, Team and Enterprise Plan Supabase projects are backed up automatically on a daily basis. In terms of Recovery Point Objective (RPO), Daily Backups would be suitable for projects willing to lose up to 24 hours worth of data if disaster hits at the most inopportune time. If a lower RPO is required, enabling Point-in-Time Recovery should be considered.
-
-
-
-For security purposes, passwords for custom roles are not stored in daily backups, and will not be found in downloadable files. As such, if you are restoring from a daily backup and are using custom roles, you will need to set their passwords once more following a completed restoration.
-
-
-
-### Backup process [#daily-backups-process]
-
-The Postgres utility [pg_dumpall](https://www.postgresql.org/docs/current/app-pg-dumpall.html) is used to perform daily backups. An SQL file is generated, zipped up, and sent to our storage servers for safe keeping.
-
-You can access daily backups in the [Scheduled backups](/dashboard/project/_/database/backups/scheduled) settings in the Dashboard. Pro Plan projects can access the last 7 days' worth of daily backups. Team Plan projects can access the last 14 days' worth of daily backups, while Enterprise Plan projects can access up to 30 days' worth of daily backups. Users can restore their project to any one of the backups. If you wish to generate a logical backup on your own, you can do so through the [Supabase CLI](/docs/reference/cli/supabase-db-dump).
-
-You can also manage backups programmatically using the Management API:
+You can also manage backups programmatically [using the Management API](/docs/reference/api/v1-list-all-backups):
```bash
# Get your access token from https://supabase.com/dashboard/account/tokens
@@ -80,7 +44,7 @@ export PROJECT_REF="your-project-ref"
curl -H "Authorization: Bearer $SUPABASE_ACCESS_TOKEN" \
"https://api.supabase.com/v1/projects/$PROJECT_REF/database/backups"
-# Restore from a PITR (not logical) backup (replace ISO timestamp with desired restore point)
+# Restore from a PITR backup (replace Unix timestamp with desired restore point)
curl -X POST "https://api.supabase.com/v1/projects/$PROJECT_REF/database/backups/restore-pitr" \
-H "Authorization: Bearer $SUPABASE_ACCESS_TOKEN" \
-H "Content-Type: application/json" \
@@ -89,126 +53,87 @@ curl -X POST "https://api.supabase.com/v1/projects/$PROJECT_REF/database/backups
}'
```
-#### Backup process for large databases
-
-Databases larger than 15GB[^1], if they're on a recent build[^2] of the Supabase platform, get automatically transitioned[^3] to use daily physical backups. Physical backups are a more performant backup mechanism that lowers the overhead and impact on the database being backed up, and also avoids holding locks on objects in your database for a long period of time. While restores are unaffected, the backups created using this method cannot be downloaded from the Backups section of the dashboard.
-
-This class of physical backups only allows for recovery to a fixed time each day, similar to daily backups. You can upgrade to [PITR](#point-in-time-recovery) for access to more granular recovery options.
+### Restoration process
-Once a database is transitioned to using physical backups, it continues to use physical backups, even if the database size falls back below the threshold for the transition.
+When selecting a backup to restore to, choose the closest available backup made before your desired restore point. You can always choose earlier backups, but consider how many days of data you might lose.
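That selection rule is easy to express in code. A hypothetical helper (not part of the Dashboard), taking Unix timestamps, that picks the latest backup at or before the target:

```shell
# closest_backup TARGET BACKUP...  -> latest backup not after TARGET
closest_backup() {
  local target="$1" best=""
  shift
  for b in "$@"; do
    # keep b when it is <= target and newer than the best so far
    if [ "$b" -le "$target" ] && { [ -z "$best" ] || [ "$b" -gt "$best" ]; }; then
      best="$b"
    fi
  done
  echo "$best"
}

closest_backup 1700000000 1699900000 1699990000 1700000500   # 1699990000
```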
-[^1]: The threshold for transitioning will be slowly lowered over time. Eventually, all projects will be transitioned to using physical backups.
-[^2]: Projects created or upgraded after the 14th of July 2022 are eligible.
-[^3]: The transition to physical backups is handled transparently and does not require any user intervention. It involves a single restart of the database to pick up new configuration that can only be loaded at start; the expected downtime for the restart is a few seconds.
+The Dashboard prompts you for confirmation before proceeding with the restoration. The project is inaccessible during the restoration, so plan for downtime beforehand; the larger the database, the longer the downtime.
-### Restoration process [#daily-backups-restoration-process]
+After you confirm, we trigger the process to restore the chosen backup to your project. The Dashboard displays a notification once the restoration completes.
-When selecting a backup to restore to, select the closest available one made before the desired point in time to restore to. Earlier backups can always be chosen too but do consider the number of days' worth of data that could be lost.
-
-The Dashboard will then prompt for a confirmation before proceeding with the restoration. The project will be inaccessible following this. As such, do ensure to allot downtime beforehand. This is dependent on the size of the database. The larger it is, the longer the downtime will be. Once the confirmation has been given, the underlying SQL of the chosen backup is then run against the project. The Postgres utility [psql](https://www.postgresql.org/docs/current/app-psql.html) is used to facilitate the restoration. The Dashboard will display a notification once the restoration completes.
-
-If your project is using subscriptions or replication slots, you will need to drop them prior to the restoration, and re-create them afterwards. The slot used by Realtime is exempted from this, and will be handled automatically.
+If your project uses subscriptions or replication slots, you need to drop them before the restoration and re-create them afterwards. We exempt the slot used by Realtime from this requirement and handle it automatically.
{/* screenshot of the Dashboard of the project completing restoration */}
## Point-in-Time recovery
-Point-in-Time Recovery (PITR) allows a project to be backed up at much shorter intervals. This provides users an option to restore to any chosen point of up to seconds in granularity. Even with daily backups, a day's worth of data could still be lost. With PITR, backups could be performed up to the point of disaster.
+Point-in-Time Recovery (PITR) allows you to back up a project at much shorter intervals, giving you the option to restore to any chosen point with up-to-the-second granularity. Even with daily backups, you could still lose a day's worth of data. With PITR, you can restore to a point just before disaster struck.
Pro, Team and Enterprise Plan projects can enable PITR as an add-on.
-Projects interested in PITR will also need to use at least a Small compute add-on, in order to ensure smooth functioning.
+Projects that want to use PITR must also use at least a Small compute add-on to ensure smooth functioning.
-
+
+
+
-If you enable PITR, Daily Backups will no longer be taken. PITR provides a finer granularity than Daily Backups, so it's unnecessary to run both.
+ As [covered in this blog post](/blog/postgresql-physical-logical-backups), a combination of physical backups and [Write Ahead Log (WAL)](https://www.postgresql.org/docs/current/wal-intro.html) file archiving makes PITR possible. Physical backups provide a snapshot of the underlying directory of the database, while WAL files contain records of every change the database processes.
-
-
-When you disable PITR, all new backups will still be taken as physical backups only. Physical backups can still be used for restoration, but they are not available for direct download. If you need to download a backup after PITR is disabled, you’ll need to take a manual [logical backup using the Supabase CLI or pg_dump](/docs/guides/platform/migrating-within-supabase/backup-restore#backup-database-using-the-cli).
+ We use [WAL-G](https://github.com/wal-g/wal-g), an open source archival and restoration tool, to handle both aspects of PITR. Each day, we take a snapshot of the database and send it to our storage servers. Throughout the day, as database transactions occur, we generate and upload WAL files.
-
+ By default, we back up WAL files at two-minute intervals. If these files exceed a certain size threshold, we back them up immediately; during periods of high transaction volume, WAL file backups therefore become more frequent. Conversely, when the database has no activity, we make no WAL file backups. Overall, in the worst case, PITR achieves a Recovery Point Objective (RPO) of two minutes.
-If PITR has been disabled, logical backups remain available until they pass the backup retention period for your plan. After that window passes, only physical backups will be shown.
+
-
+
+
-### Backup process [#pitr-backup-process]
+
-As discussed [here](/blog/postgresql-physical-logical-backups), PITR is made possible by a combination of taking physical backups of a project, as well as archiving [Write Ahead Log (WAL)](https://www.postgresql.org/docs/current/wal-intro.html) files. Physical backups provide a snapshot of the underlying directory of the database, while WAL files contain records of every change made in the database.
+If you enable PITR, we will no longer take Daily Backups. PITR provides finer granularity than Daily Backups, so running both is unnecessary.
-Supabase uses [WAL-G](https://github.com/wal-g/wal-g), an open source archival and restoration tool, to handle both aspects of PITR. On a daily basis, a snapshot of the database is taken and sent to our storage servers. Throughout the day, as database transactions occur, WAL files are generated and uploaded.
+
-By default, WAL files are backed up at two minute intervals. If these files cross a certain file size threshold, they are backed up immediately. As such, during periods of high amount of transactions, WAL file backups become more frequent. Conversely, when there is no activity in the database, WAL file backups are not made. Overall, this would mean that at the worst case scenario or disaster, the PITR achieves a Recovery Point Objective (RPO) of two minutes.
+### Backup process

-You can access PITR in the [Point in Time](/dashboard/project/_/database/backups/pitr) settings in the Dashboard. The recovery period of a project is indicated by the earliest and latest points of recoveries displayed in your preferred timezone. If need be, the maximum amount of this recovery period can be modified accordingly.
+You can access PITR in the [Point in Time](/dashboard/project/_/database/backups/pitr) settings in the Dashboard. The recovery period of a project is shown by the earliest and latest recovery points displayed in your preferred timezone. You can change the maximum recovery period if needed.
-Note that the latest restore point of the project could be significantly far from the current time. This occurs when there has not been any recent activity in the database, and therefore no WAL file backups have been made recently. This is perfectly fine as the state of the database at the latest point of recovery would still be indicative of the state of the database at the current time given that no transactions have been made in between.
+The latest restore point of the project could be significantly behind the current time. This occurs when the database has had no recent activity, and therefore we have not made any recent WAL file backups. However, the state of the database at the latest recovery point still reflects the current state of the database, given that no transactions have occurred in between.
-### Restoration process [#pitr-restoration-process]
+### Restoration process

-A date and time picker will be provided upon pressing the `Start a restore` button. The process will only proceed if the selected date and time fall within the earliest and latest points of recoveries.
+A date and time picker appears when you click the **Start a restore** button. The process only proceeds if the selected date and time fall within the earliest and latest recovery points.

-After locking in the desired point in time to recover to, The Dashboard will then prompt for a review and confirmation before proceeding with the restoration. The project will be inaccessible following this. As such, do ensure to allot for downtime beforehand. This is dependent on the size of the database. The larger it is, the longer the downtime will be. Once the confirmation has been given, the latest physical backup available is downloaded to the project and the database is partially restored. WAL files generated after this physical backup up to the specified point-in-time are then downloaded. The underlying records of transactions in these files are replayed against the database to complete the restoration. The Dashboard will display a notification once the restoration completes.
+After selecting your desired recovery point, the Dashboard prompts you to review and confirm before proceeding with the restoration. The project is inaccessible during the restoration, so plan for downtime beforehand; the larger the database, the longer the downtime. After you confirm, we download the latest available physical backup to the project and partially restore the database. We then download the WAL files generated after this physical backup, up to your specified point in time, and replay the transaction records they contain against the database to complete the restoration. The Dashboard displays a notification once the restoration completes.
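The Management API's `restore-pitr` call shown earlier takes the restore target as a Unix timestamp. A small conversion helper for picking that value (assumes GNU `date`; the exact request body is documented in the Management API reference):

```shell
# UTC time string -> Unix seconds, and back (GNU date syntax).
utc_to_unix() { date -u -d "$1" +%s; }
unix_to_utc() { date -u -d "@$1" +"%Y-%m-%dT%H:%M:%SZ"; }

utc_to_unix "2024-01-01T00:00:00Z"   # 1704067200
unix_to_utc 1704067200               # 2024-01-01T00:00:00Z
```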
<$Show if="billing:all">
<$Partial path="billing/pricing/pricing_pitr.mdx" />
$Show>
-## Restore to a new project
-
-See the [Duplicate Project docs](/docs/guides/platform/clone-project).
-
-## Troubleshooting
-
-### Logical backups
-
-#### `search_path` issues
-
-During the `pg_restore` process, the `search_path` is set to an empty string for predictability, and security. Using unqualified references to functions or relations can cause restorations using logical backups to fail, as the database will not be able to locate the function or relation being referenced. This can happen even if the database functions without issues during normal operations, as the `search_path` is usually set to include several schemas during normal operations. Therefore, you should always use schema-qualified names within your SQL code.
-
-You can refer to [an example PR](https://github.com/supabase/supabase/pull/28393/files) on how to update SQL code to use schema-qualified names.
+### Downloading backups after disabling PITR
-#### Invalid check constraints
+When you disable PITR, we still take all new backups as physical backups only. You can still use physical backups for restoration, but they are not available for direct download. If you need to download a backup after disabling PITR, take a manual [logical backup using the Supabase CLI or pg_dump](/docs/guides/platform/migrating-within-supabase/backup-restore#backup-database-using-the-cli).
-Postgres requires that [check constraints](https://www.postgresql.org/docs/current/ddl-constraints.html#DDL-CONSTRAINTS-CHECK-CONSTRAINTS) be:
-
-1. immutable
-1. not reference table data other than the new or updated row being checked
-
-Violating these requirements can result in numerous failure scenarios, including during logical restorations.
-
-Common examples of check constraints that can result in such failures are:
-
-- validating against the current time, e.g. that the row being inserted references a future event
-- validating the contents of a row against the contents of another table
-
-#### Views that reference themselves
-
-Views that directly or indirectly reference themselves will cause logical restores to fail due to cyclic dependency errors. These views are also invalid and unusable in Postgres, and any query against them will result in a runtime error.
-
-**Example:**
-
-```
--- Direct self-reference
-CREATE VIEW my_view AS
- SELECT * FROM my_view;
-
--- Indirect circular reference
-CREATE VIEW v1 AS SELECT * FROM v2;
-CREATE VIEW v2 AS SELECT * FROM v1;
-```
-
--- Drop the offending view from your database, or delete them from the logical backup to make it restorable.
+## Restore to a new project
-Postgres documentation [views](https://www.postgresql.org/docs/current/sql-createview.html)
+See the [Duplicate Project docs](/docs/guides/platform/clone-project).
diff --git a/apps/docs/content/guides/self-hosting/self-hosted-oauth.mdx b/apps/docs/content/guides/self-hosting/self-hosted-oauth.mdx
index 702cdd8756547..4297d06127ce5 100644
--- a/apps/docs/content/guides/self-hosting/self-hosted-oauth.mdx
+++ b/apps/docs/content/guides/self-hosting/self-hosted-oauth.mdx
@@ -70,11 +70,14 @@ GOOGLE_SECRET=your-client-secret
Uncomment the corresponding `GOTRUE_EXTERNAL_` lines in the `auth` service's `environment`:
-```
-GOTRUE_EXTERNAL_GOOGLE_ENABLED: ${GOOGLE_ENABLED}
-GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
-GOTRUE_EXTERNAL_GOOGLE_SECRET: ${GOOGLE_SECRET}
-GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
+```yaml
+auth:
+ environment:
+ # ... existing variables ...
+ GOTRUE_EXTERNAL_GOOGLE_ENABLED: ${GOOGLE_ENABLED}
+ GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
+ GOTRUE_EXTERNAL_GOOGLE_SECRET: ${GOOGLE_SECRET}
+ GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
```
@@ -94,7 +97,7 @@ docker compose up -d --force-recreate --no-deps auth
Check that the provider is enabled:
```sh
-curl https:///auth/v1/settings
+curl -H 'apikey: your-anon-key' https:///auth/v1/settings
```
The response should include your provider under `external`:
@@ -141,11 +144,14 @@ GOOGLE_SECRET=your-google-client-secret
**`docker-compose.yml`:**
-```
-GOTRUE_EXTERNAL_GOOGLE_ENABLED: ${GOOGLE_ENABLED}
-GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
-GOTRUE_EXTERNAL_GOOGLE_SECRET: ${GOOGLE_SECRET}
-GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
+```yaml
+auth:
+ environment:
+ # ... existing variables ...
+ GOTRUE_EXTERNAL_GOOGLE_ENABLED: ${GOOGLE_ENABLED}
+ GOTRUE_EXTERNAL_GOOGLE_CLIENT_ID: ${GOOGLE_CLIENT_ID}
+ GOTRUE_EXTERNAL_GOOGLE_SECRET: ${GOOGLE_SECRET}
+ GOTRUE_EXTERNAL_GOOGLE_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
```
@@ -171,11 +177,14 @@ GITHUB_SECRET=your-github-client-secret
**`docker-compose.yml`:**
-```
-GOTRUE_EXTERNAL_GITHUB_ENABLED: ${GITHUB_ENABLED}
-GOTRUE_EXTERNAL_GITHUB_CLIENT_ID: ${GITHUB_CLIENT_ID}
-GOTRUE_EXTERNAL_GITHUB_SECRET: ${GITHUB_SECRET}
-GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
+```yaml
+auth:
+ environment:
+ # ... existing variables ...
+ GOTRUE_EXTERNAL_GITHUB_ENABLED: ${GITHUB_ENABLED}
+ GOTRUE_EXTERNAL_GITHUB_CLIENT_ID: ${GITHUB_CLIENT_ID}
+ GOTRUE_EXTERNAL_GITHUB_SECRET: ${GITHUB_SECRET}
+ GOTRUE_EXTERNAL_GITHUB_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
```
@@ -207,13 +216,16 @@ AZURE_SECRET=your-azure-client-secret
**`docker-compose.yml`:**
-```
-GOTRUE_EXTERNAL_AZURE_ENABLED: ${AZURE_ENABLED}
-GOTRUE_EXTERNAL_AZURE_CLIENT_ID: ${AZURE_CLIENT_ID}
-GOTRUE_EXTERNAL_AZURE_SECRET: ${AZURE_SECRET}
-GOTRUE_EXTERNAL_AZURE_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
-## Optional: uncomment for tenant-specific Azure login
-# GOTRUE_EXTERNAL_AZURE_URL: ${AZURE_URL}
+```yaml
+auth:
+ environment:
+ # ... existing variables ...
+ GOTRUE_EXTERNAL_AZURE_ENABLED: ${AZURE_ENABLED}
+ GOTRUE_EXTERNAL_AZURE_CLIENT_ID: ${AZURE_CLIENT_ID}
+ GOTRUE_EXTERNAL_AZURE_SECRET: ${AZURE_SECRET}
+ GOTRUE_EXTERNAL_AZURE_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
+ ## Optional: uncomment for tenant-specific Azure login
+ # GOTRUE_EXTERNAL_AZURE_URL: ${AZURE_URL}
```
@@ -236,11 +248,14 @@ APPLE_SECRET=your-generated-jwt-client-secret
**`docker-compose.yml`:**
-```
-GOTRUE_EXTERNAL_APPLE_ENABLED: ${APPLE_ENABLED}
-GOTRUE_EXTERNAL_APPLE_CLIENT_ID: ${APPLE_CLIENT_ID}
-GOTRUE_EXTERNAL_APPLE_SECRET: ${APPLE_SECRET}
-GOTRUE_EXTERNAL_APPLE_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
+```yaml
+auth:
+ environment:
+ # ... existing variables ...
+ GOTRUE_EXTERNAL_APPLE_ENABLED: ${APPLE_ENABLED}
+ GOTRUE_EXTERNAL_APPLE_CLIENT_ID: ${APPLE_CLIENT_ID}
+ GOTRUE_EXTERNAL_APPLE_SECRET: ${APPLE_SECRET}
+ GOTRUE_EXTERNAL_APPLE_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
```
@@ -276,12 +291,15 @@ KEYCLOAK_URL=https://keycloak.example.com/realms/myrealm
**`docker-compose.yml` passthrough:**
-```
-GOTRUE_EXTERNAL_KEYCLOAK_ENABLED: ${KEYCLOAK_ENABLED}
-GOTRUE_EXTERNAL_KEYCLOAK_CLIENT_ID: ${KEYCLOAK_CLIENT_ID}
-GOTRUE_EXTERNAL_KEYCLOAK_SECRET: ${KEYCLOAK_SECRET}
-GOTRUE_EXTERNAL_KEYCLOAK_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
-GOTRUE_EXTERNAL_KEYCLOAK_URL: ${KEYCLOAK_URL}
+```yaml
+auth:
+ environment:
+ # ... existing variables ...
+ GOTRUE_EXTERNAL_KEYCLOAK_ENABLED: ${KEYCLOAK_ENABLED}
+ GOTRUE_EXTERNAL_KEYCLOAK_CLIENT_ID: ${KEYCLOAK_CLIENT_ID}
+ GOTRUE_EXTERNAL_KEYCLOAK_SECRET: ${KEYCLOAK_SECRET}
+ GOTRUE_EXTERNAL_KEYCLOAK_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
+ GOTRUE_EXTERNAL_KEYCLOAK_URL: ${KEYCLOAK_URL}
```
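Every provider block above follows the same pattern. As a generalization, a hypothetical extra provider would be wired up like this (`FOO` is a placeholder; GoTrue only recognizes the provider names it supports, so check the GoTrue documentation for the exact name):

```yaml
auth:
  environment:
    # ... existing variables ...
    GOTRUE_EXTERNAL_FOO_ENABLED: ${FOO_ENABLED}
    GOTRUE_EXTERNAL_FOO_CLIENT_ID: ${FOO_CLIENT_ID}
    GOTRUE_EXTERNAL_FOO_SECRET: ${FOO_SECRET}
    GOTRUE_EXTERNAL_FOO_REDIRECT_URI: ${API_EXTERNAL_URL}/auth/v1/callback
```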
@@ -331,7 +349,12 @@ For each provider, you need at minimum `ENABLED`, `CLIENT_ID`, `SECRET`, and `RE
## Test the login flow
-You can test OAuth with the following minimal `index.html` served, e.g., via `python -m http.server 3000`:
+You can test OAuth with the following minimal HTML page:
+
+- Save the code below to `index.html`
+- Start `python -m http.server 3000` in the same directory
+- Make sure `SITE_URL` is set to `http://localhost:3000` in your self-hosted Supabase `.env` configuration
+- Open your browser and go to `http://localhost:3000`
```html
@@ -373,12 +396,6 @@ You can test OAuth with the following minimal `index.html` served, e.g., via `py