diff --git a/ai/concepts/vector-search-overview.md b/ai/concepts/vector-search-overview.md
index eaca016526a25..b4303cb6179e0 100644
--- a/ai/concepts/vector-search-overview.md
+++ b/ai/concepts/vector-search-overview.md
@@ -72,3 +72,9 @@ To get started with TiDB Vector Search, see the following documents:
 
- [Get started with vector search using Python](/ai/quickstart-via-python.md)
- [Get started with vector search using SQL](/ai/quickstart-via-sql.md)
+
+## Related resources
+
+
+
+
diff --git a/br/backup-and-restore-overview.md b/br/backup-and-restore-overview.md
index 7f3ce7f9e6c64..78658de217629 100644
--- a/br/backup-and-restore-overview.md
+++ b/br/backup-and-restore-overview.md
@@ -179,3 +179,9 @@ The following table lists the compatibility matrix for log backups. Note that al
- [TiDB Snapshot Backup and Restore Guide](/br/br-snapshot-guide.md)
- [TiDB Log Backup and PITR Guide](/br/br-pitr-guide.md)
- [Backup Storages](/br/backup-and-restore-storages.md)
+
+## Related resources
+
+
+
+
diff --git a/choose-index.md b/choose-index.md
index 753a4e01334ac..7f570c05952a0 100644
--- a/choose-index.md
+++ b/choose-index.md
@@ -687,3 +687,9 @@ mysql> SHOW WARNINGS; -- cannot hit plan cache since the JSON_CONTAINS predicat
+---------+------+-------------------------------------------------------------------------------------------------------+
1 row in set (0.01 sec)
```
+
+## Related resources
+
+
+
+
diff --git a/clustered-indexes.md b/clustered-indexes.md
index 8e474debdd345..8e516b6871eea 100644
--- a/clustered-indexes.md
+++ b/clustered-indexes.md
@@ -215,3 +215,9 @@ The attribute [`AUTO_RANDOM`](/auto-random.md) can only be used on clustered ind
mysql> create table t (a bigint primary key nonclustered auto_random);
ERROR 8216 (HY000): Invalid auto random: column a is not the integer primary key, or the primary key is nonclustered
```
+
+## Related resources
+
+
+
+
diff --git a/dm/migrate-data-using-dm.md b/dm/migrate-data-using-dm.md
index 395b03d2c3d3f..42cfb498ad4d6 100644
--- a/dm/migrate-data-using-dm.md
+++ b/dm/migrate-data-using-dm.md
@@ -189,3 +189,9 @@ While the DM cluster is running, DM-master, DM-worker, and dmctl output the moni
 
- DM-master log directory: It is specified by the `--log-file` DM-master process parameter. If DM is deployed using TiUP, the log directory is `{log_dir}` in the DM-master node.
- DM-worker log directory: It is specified by the `--log-file` DM-worker process parameter. If DM is deployed using TiUP, the log directory is `{log_dir}` in the DM-worker node.
+
+## Related resources
+
+
+
+
diff --git a/dumpling-overview.md b/dumpling-overview.md
index d58c4c4ed6b52..63733c0279bde 100644
--- a/dumpling-overview.md
+++ b/dumpling-overview.md
@@ -451,3 +451,9 @@ In addition to output data files, you can define `--output-filename-template` to
| view | `{{fn .DB}}.{{fn .Table}}-schema-view` |
 
For example, using `--output-filename-template '{{define "table"}}{{fn .Table}}.$schema{{end}}{{define "data"}}{{fn .Table}}.{{printf "%09d" .Index}}{{end}}'`, Dumpling will write the schema of the table `db.tbl:normal` into a file named `tbl%3Anormal.$schema.sql`, and write the data into files `tbl%3Anormal.000000000.sql`, `tbl%3Anormal.000000001.sql`, and so on.
+
+## Related resources
+
+
+
+
diff --git a/get-started-with-tidb-lightning.md b/get-started-with-tidb-lightning.md
index 41b2905e36db6..5ddb96e9af5d1 100644
--- a/get-started-with-tidb-lightning.md
+++ b/get-started-with-tidb-lightning.md
@@ -114,3 +114,9 @@ If any error occurs, refer to [TiDB Lightning FAQs](/tidb-lightning/tidb-lightni
This tutorial briefly introduces what TiDB Lightning is and how to quickly deploy a TiDB Lightning cluster to import full backup data to the TiDB cluster.
 
For detailed features and usage about TiDB Lightning, refer to [TiDB Lightning Overview](/tidb-lightning/tidb-lightning-overview.md).
+
+## Related resources
+
+
+
+
diff --git a/integration-overview.md b/integration-overview.md
index 5988a1ffa59aa..8f60cd505c9b9 100644
--- a/integration-overview.md
+++ b/integration-overview.md
@@ -13,4 +13,10 @@ You can use TiCDC to replicate incremental data from TiDB to Confluent Cloud, an
 
## Integrate with Apache Kafka and Apache Flink
 
-You can use TiCDC to replicate incremental data from TiDB to Apache Kafka, and consume the data using Apache Flink. For details, see [Integrate with Apache Kafka and Apache Flink](/replicate-data-to-kafka.md).
\ No newline at end of file
+You can use TiCDC to replicate incremental data from TiDB to Apache Kafka, and consume the data using Apache Flink. For details, see [Integrate with Apache Kafka and Apache Flink](/replicate-data-to-kafka.md).
+
+## Related resources
+
+
+
+
diff --git a/overview.md b/overview.md
index 135820c1f12ff..411c6ffa5e37d 100644
--- a/overview.md
+++ b/overview.md
@@ -68,3 +68,9 @@ The following video introduces key features of TiDB.
- [TiDB Storage](/tidb-storage.md)
- [TiDB Computing](/tidb-computing.md)
- [TiDB Scheduling](/tidb-scheduling.md)
+
+## Related resources
+
+
+
+
diff --git a/partitioned-table.md b/partitioned-table.md
index 4b4c76b991aff..e39f55c7c7ef5 100644
--- a/partitioned-table.md
+++ b/partitioned-table.md
@@ -2103,3 +2103,9 @@ Currently, `static` pruning mode does not support plan cache for both prepared a
SET session tidb_partition_prune_mode = dynamic;
source gatherGlobalStats.sql
```
+
+## Related resources
+
+
+
+
diff --git a/production-deployment-using-tiup.md b/production-deployment-using-tiup.md
index 1f9ffeee9da93..e008bc48aa6d6 100644
--- a/production-deployment-using-tiup.md
+++ b/production-deployment-using-tiup.md
@@ -399,3 +399,9 @@ If you have deployed [TiCDC](/ticdc/ticdc-overview.md) along with the TiDB clust
- [TiCDC FAQs](/ticdc/ticdc-faq.md)
 
If you want to scale out or scale in your TiDB cluster without interrupting the online services, see [Scale a TiDB Cluster Using TiUP](/scale-tidb-using-tiup.md).
+
+## Related resources
+
+
+
+
diff --git a/releases/release-8.5.0.md b/releases/release-8.5.0.md
index 8b53e9c0a2dee..1a08fc5d92caa 100644
--- a/releases/release-8.5.0.md
+++ b/releases/release-8.5.0.md
@@ -443,3 +443,9 @@ We would like to thank the following contributors from the TiDB community:
- [LindaSummer](https://github.com/LindaSummer)
- [songzhibin97](https://github.com/songzhibin97)
- [Hexilee](https://github.com/Hexilee)
+
+## Related resources
+
+
+
+
diff --git a/sync-diff-inspector/sync-diff-inspector-overview.md b/sync-diff-inspector/sync-diff-inspector-overview.md
index 1beb0b762ac54..27ae9480dc39b 100644
--- a/sync-diff-inspector/sync-diff-inspector-overview.md
+++ b/sync-diff-inspector/sync-diff-inspector-overview.md
@@ -317,3 +317,9 @@ REPLACE INTO `sbtest`.`sbtest99`(`id`,`k`,`c`,`pad`) VALUES (3700000,2501808,'he
- sync-diff-inspector divides data into chunks first according to TiDB statistics and you need to guarantee the accuracy of the statistics. You can manually run the `analyze table {table_name}` command when the TiDB server's *workload is light*.
- Pay special attention to `table-rules`. If you configure `schema-pattern="test1"`, `table-pattern = "t_1"`, `target-schema="test2"` and `target-table = "t_2"`, the `test1`.`t_1` schema in the source database and the `test2`.`t_2` schema in the target database are compared. Sharding is enabled by default in sync-diff-inspector, so if the source database has a `test2`.`t_2` table, the `test1`.`t_1` table and `test2`.`t_2` table in the source database serving as sharding are compared with the `test2`.`t_2` table in the target database.
- The generated SQL file is only used as a reference for repairing data, and you need to confirm it before executing these SQL statements to repair data.
+
+## Related resources
+
+
+
+
diff --git a/ticdc/ticdc-overview.md b/ticdc/ticdc-overview.md
index d5db738fa7078..93546cea8b191 100644
--- a/ticdc/ticdc-overview.md
+++ b/ticdc/ticdc-overview.md
@@ -163,3 +163,9 @@ Currently, the following scenarios are not supported:
- Starting from v8.2.0, BR relaxes the restrictions on data restoration for TiCDC: if the `BackupTS` (the backup time) of the data to be restored is earlier than the changefeed [`CheckpointTS`](/ticdc/ticdc-classic-architecture.md#checkpointts) (the timestamp that indicates the current replication progress), BR can proceed with the data restoration normally. Considering that the `BackupTS` is usually much earlier, it can be assumed that in most scenarios, BR supports restoring data for a cluster with TiCDC replication tasks.
 
TiCDC only partially supports scenarios involving large transactions in the upstream. For details, refer to the [TiCDC FAQ](/ticdc/ticdc-faq.md#does-ticdc-support-replicating-large-transactions-is-there-any-risk), where you can find details on whether TiCDC supports replicating large transactions and any associated risks.
+
+## Related resources
+
+
+
+
diff --git a/tidb-cloud/branch-overview.md b/tidb-cloud/branch-overview.md
index f136517546f3b..0214d1a235bbb 100644
--- a/tidb-cloud/branch-overview.md
+++ b/tidb-cloud/branch-overview.md
@@ -57,3 +57,9 @@ If you need more quotas, [contact TiDB Cloud Support](/tidb-cloud/tidb-cloud-sup
## What's next
 
- [Learn how to manage branches](/tidb-cloud/branch-manage.md)
+
+## Related resources
+
+
+
+
diff --git a/tidb-cloud/integrate-tidbcloud-with-airbyte.md b/tidb-cloud/integrate-tidbcloud-with-airbyte.md
index 273e929e64cf7..21c2801e648a1 100644
--- a/tidb-cloud/integrate-tidbcloud-with-airbyte.md
+++ b/tidb-cloud/integrate-tidbcloud-with-airbyte.md
@@ -106,6 +106,8 @@ The following steps use TiDB as both a source and a destination. Other connector
- TiDB destination converts the `timestamp` type to the `varchar` type in default normalization mode. It happens because Airbyte converts the timestamp type to string during transmission, and TiDB does not support `cast ('2020-07-28 14:50:15+1:00' as timestamp)`.
- For some large ELT missions, you need to increase the parameters of [transaction restrictions](/develop/dev-guide-transaction-restraints.md#large-transaction-restrictions) in TiDB.
 
-## See also
+## Related resources
 
-[Using Airbyte to Migrate Data from TiDB Cloud to Snowflake](https://www.pingcap.com/blog/using-airbyte-to-migrate-data-from-tidb-cloud-to-snowflake/).
+
+
+
diff --git a/tidb-cloud/tidb-cloud-intro.md b/tidb-cloud/tidb-cloud-intro.md
index 64cac50471db0..2fbc23c230c1c 100644
--- a/tidb-cloud/tidb-cloud-intro.md
+++ b/tidb-cloud/tidb-cloud-intro.md
@@ -133,3 +133,11 @@ TiDB Cloud provides the following deployment options:
- Your VPC
 
You can connect to your TiDB Cloud resource via private endpoint connection or VPC peering connection. Refer to [Set Up Private Endpoint Connections](/tidb-cloud/set-up-private-endpoint-connections.md) or [Set up VPC Peering Connection](/tidb-cloud/set-up-vpc-peering-connections.md) for details.
+
+## Related resources
+
+
+
+
+
+
diff --git a/tidb-cloud/tidb-x-architecture.md b/tidb-cloud/tidb-x-architecture.md
index 12befe4aa48ad..52f703aebd7f0 100644
--- a/tidb-cloud/tidb-x-architecture.md
+++ b/tidb-cloud/tidb-x-architecture.md
@@ -157,3 +157,11 @@ The following table summarizes the architectural transitions from classic TiDB t
| DDL execution | DDL competes with user traffic for local CPU and I/O | DDL offloaded to elastic compute resources | Faster schema changes with more predictable latency |
| Cost model | Requires over-provisioning for peak workloads | Elastic TCO (pay-as-you-go) | Pay only for actual resource consumption |
| Backup | Data-volume dependent physical backup | Metadata-driven with object storage integration | Significantly faster backup operations |
+
+## Related resources
+
+
+
+
+
+
diff --git a/tidb-resource-control-ru-groups.md b/tidb-resource-control-ru-groups.md
index ab2a925dbdb87..1550d6a33bfb8 100644
--- a/tidb-resource-control-ru-groups.md
+++ b/tidb-resource-control-ru-groups.md
@@ -445,4 +445,10 @@ The resource control feature does not impact the regular usage of data import, e
* [CREATE RESOURCE GROUP](/sql-statements/sql-statement-create-resource-group.md)
* [ALTER RESOURCE GROUP](/sql-statements/sql-statement-alter-resource-group.md)
* [DROP RESOURCE GROUP](/sql-statements/sql-statement-drop-resource-group.md)
-* [RESOURCE GROUP RFC](https://github.com/pingcap/tidb/blob/master/docs/design/2022-11-25-global-resource-control.md)
+* [RESOURCE GROUP RFC](https://github.com/pingcap/tidb/blob/release-8.5/docs/design/2022-11-25-global-resource-control.md)
+
+## Related resources
+
+
+
+