From a185c33115758d0d0066784fd0e409dc356e06d5 Mon Sep 17 00:00:00 2001
From: Dan Stefaniuk
Date: Tue, 24 Jun 2025 15:10:40 +0100
Subject: [PATCH 1/5] Incorporate feedback from the tech leads

---
 practices/feature-toggling.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/practices/feature-toggling.md b/practices/feature-toggling.md
index 1610510c..e5a2b4ab 100644
--- a/practices/feature-toggling.md
+++ b/practices/feature-toggling.md
@@ -67,6 +67,9 @@ According to Martin Fowler, toggles typically fall into the following categories
 - **Permission toggles**: Enable features based on user roles or attributes.
 
 > [!NOTE]
+> For teams practising daily integration and deployment, feature flagging is a foundational capability. It enables separation of deployment from release, allowing incomplete features to be merged, deployed, and safely hidden from users until ready. However, where a product’s needs are focused on basic **Canary Releasing** or **A/B testing**, and the aspiration for daily deployment to production is yet to be realised, teams may choose to start with the native capabilities of their cloud provider (e.g., Azure deployment slots, AWS Lambda aliases, or traffic-routing rules, etc.). These offer infrastructure-level rollout control with minimal additional complexity or reliance on third-party tooling. While dedicated feature toggling services provide powerful feature-level targeting and experimentation, they also introduce external dependencies that may be unnecessary for very simple workloads. Therefore, teams may decide to start simple and evolve their approach as feature granularity, targeting precision, and user needs increase.
+
+> [!WARNING]
 > While permission toggles can target users by role or attribute during a rollout or experiment, they are not a replacement for robust, permanent role-based access control (RBAC). Use RBAC as a separate, first-class mechanism for managing user permissions.
 
 ## Managing toggles

From 83520d86229197c587fcf9598d2f391f5f6c8920 Mon Sep 17 00:00:00 2001
From: Dan Stefaniuk
Date: Tue, 24 Jun 2025 15:19:03 +0100
Subject: [PATCH 2/5] Fix markdown formatting

---
 practices/feature-toggling.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/practices/feature-toggling.md b/practices/feature-toggling.md
index e5a2b4ab..6cb78fe9 100644
--- a/practices/feature-toggling.md
+++ b/practices/feature-toggling.md
@@ -62,13 +62,14 @@ Toggles can be defined statically (e.g., environment variable or config file) or
 According to Martin Fowler, toggles typically fall into the following categories:
 
 - **Release toggles**: Allow incomplete features to be merged and deployed.
-- **Experiment toggles**: Support A/B or multivariate testing.
-- **Ops toggles**: Provide operational control for performance or reliability.
-- **Permission toggles**: Enable features based on user roles or attributes.
 
 > [!NOTE]
 > For teams practising daily integration and deployment, feature flagging is a foundational capability. It enables separation of deployment from release, allowing incomplete features to be merged, deployed, and safely hidden from users until ready. However, where a product’s needs are focused on basic **Canary Releasing** or **A/B testing**, and the aspiration for daily deployment to production is yet to be realised, teams may choose to start with the native capabilities of their cloud provider (e.g., Azure deployment slots, AWS Lambda aliases, or traffic-routing rules, etc.). These offer infrastructure-level rollout control with minimal additional complexity or reliance on third-party tooling. While dedicated feature toggling services provide powerful feature-level targeting and experimentation, they also introduce external dependencies that may be unnecessary for very simple workloads. Therefore, teams may decide to start simple and evolve their approach as feature granularity, targeting precision, and user needs increase.
 
+- **Experiment toggles**: Support A/B or multivariate testing.
+- **Ops toggles**: Provide operational control for performance or reliability.
+- **Permission toggles**: Enable features based on user roles or attributes.
+
 > [!WARNING]
 > While permission toggles can target users by role or attribute during a rollout or experiment, they are not a replacement for robust, permanent role-based access control (RBAC). Use RBAC as a separate, first-class mechanism for managing user permissions.
 

From 410c94d948bdd1c2800254190ef9e7b4eaaaea9c Mon Sep 17 00:00:00 2001
From: Dan Stefaniuk
Date: Tue, 24 Jun 2025 15:46:50 +0100
Subject: [PATCH 3/5] Address peer review comments

---
 practices/feature-toggling.md | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/practices/feature-toggling.md b/practices/feature-toggling.md
index 6cb78fe9..aa073840 100644
--- a/practices/feature-toggling.md
+++ b/practices/feature-toggling.md
@@ -64,9 +64,13 @@ According to Martin Fowler, toggles typically fall into the following categories
 - **Release toggles**: Allow incomplete features to be merged and deployed.
 
 > [!NOTE]
-> For teams practising daily integration and deployment, feature flagging is a foundational capability. It enables separation of deployment from release, allowing incomplete features to be merged, deployed, and safely hidden from users until ready. However, where a product’s needs are focused on basic **Canary Releasing** or **A/B testing**, and the aspiration for daily deployment to production is yet to be realised, teams may choose to start with the native capabilities of their cloud provider (e.g., Azure deployment slots, AWS Lambda aliases, or traffic-routing rules, etc.). These offer infrastructure-level rollout control with minimal additional complexity or reliance on third-party tooling. While dedicated feature toggling services provide powerful feature-level targeting and experimentation, they also introduce external dependencies that may be unnecessary for very simple workloads. Therefore, teams may decide to start simple and evolve their approach as feature granularity, targeting precision, and user needs increase.
+> For teams practising daily integration and deployment, feature flagging is a foundational capability. It enables separation of deployment from release, allowing incomplete features to be merged, deployed, and safely hidden from users until ready. Where a product’s needs are focused on basic **canary releasing**, and the aspiration for daily deployment to production is yet to be realised, teams may choose to start with the native capabilities of their cloud provider (e.g., Azure deployment slots, AWS Lambda aliases, or traffic-routing rules). These offer infrastructure-level rollout control with minimal additional complexity or reliance on third-party tooling.
 
 - **Experiment toggles**: Support A/B or multivariate testing.
+
+> [!NOTE]
+> True **A/B testing**, involving consistent user assignment, behavioural metrics, and statistical comparison, typically requires additional tooling to manage variant bucketing, exposure logging, and result analysis. In such cases, dedicated services may be more appropriate. Teams are encouraged to start simple, and evolve their approach as feature granularity, targeting precision, analytical needs, and user expectations increase.
+
 - **Ops toggles**: Provide operational control for performance or reliability.
 - **Permission toggles**: Enable features based on user roles or attributes.
 

From 32b239111ef40b0c8ff851c97bacc5c38a9cfa5e Mon Sep 17 00:00:00 2001
From: Dan Stefaniuk
Date: Wed, 25 Jun 2025 10:26:03 +0100
Subject: [PATCH 4/5] Address peer review comments

---
 practices/feature-toggling.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/practices/feature-toggling.md b/practices/feature-toggling.md
index aa073840..97207c1a 100644
--- a/practices/feature-toggling.md
+++ b/practices/feature-toggling.md
@@ -69,7 +69,7 @@ According to Martin Fowler, toggles typically fall into the following categories
 - **Experiment toggles**: Support A/B or multivariate testing.
 
 > [!NOTE]
-> True **A/B testing**, involving consistent user assignment, behavioural metrics, and statistical comparison, typically requires additional tooling to manage variant bucketing, exposure logging, and result analysis. In such cases, dedicated services may be more appropriate. Teams are encouraged to start simple, and evolve their approach as feature granularity, targeting precision, analytical needs, and user expectations increase.
+> True **A/B testing**, involving consistent user assignment, behavioural metrics, and statistical comparison, typically requires additional tooling to manage variant bucketing, exposure logging, and result analysis. In such cases, dedicated services are more appropriate, as the solutions can be subtle and mathematically complex (be aware of ["Optional Stopping" or "Peeking Bias"](https://www.evanmiller.org/how-not-to-run-an-ab-test.html)). We do not want to have to re-invent them. Teams are encouraged to start simple, and evolve their approach as feature granularity, targeting precision, analytical needs, and user expectations increase.
 
 - **Ops toggles**: Provide operational control for performance or reliability.
 - **Permission toggles**: Enable features based on user roles or attributes.

From 37216a7648efef214a50633e9c6277946003dd23 Mon Sep 17 00:00:00 2001
From: Dan Stefaniuk
Date: Wed, 25 Jun 2025 10:32:15 +0100
Subject: [PATCH 5/5] Address peer review comments

---
 practices/feature-toggling.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/practices/feature-toggling.md b/practices/feature-toggling.md
index 97207c1a..579b30e2 100644
--- a/practices/feature-toggling.md
+++ b/practices/feature-toggling.md
@@ -69,7 +69,7 @@ According to Martin Fowler, toggles typically fall into the following categories
 - **Experiment toggles**: Support A/B or multivariate testing.
 
 > [!NOTE]
-> True **A/B testing**, involving consistent user assignment, behavioural metrics, and statistical comparison, typically requires additional tooling to manage variant bucketing, exposure logging, and result analysis. In such cases, dedicated services are more appropriate, as the solutions can be subtle and mathematically complex (be aware of ["Optional Stopping" or "Peeking Bias"](https://www.evanmiller.org/how-not-to-run-an-ab-test.html)). We do not want to have to re-invent them. Teams are encouraged to start simple, and evolve their approach as feature granularity, targeting precision, analytical needs, and user expectations increase.
+> True **A/B testing**, involving consistent user assignment, behavioural metrics, and statistical comparison, typically requires additional tooling to manage variant bucketing, exposure logging, and result analysis. In such cases, dedicated services are more appropriate, as the solutions can be subtle and mathematically complex (consider issues such as ["Optional Stopping" or "Peeking Bias"](https://www.evanmiller.org/how-not-to-run-an-ab-test.html)). We do not want to have to re-invent them. Teams are encouraged to start simple, and evolve their approach as feature granularity, targeting precision, analytical needs, and user expectations increase.
 
 - **Ops toggles**: Provide operational control for performance or reliability.
 - **Permission toggles**: Enable features based on user roles or attributes.
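For context on the "consistent user assignment" and "variant bucketing" that the experiment-toggle note describes, here is a minimal sketch of deterministic bucketing; the function name, experiment key, and variant labels are illustrative only and not taken from any particular flagging service:

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("control", "treatment")) -> str:
    """Deterministically assign a user to a variant for a given experiment.

    Hashing the experiment name together with the user id keeps the assignment
    stable across sessions and deployments (the same user always sees the same
    variant) while keeping assignments independent between experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The assignment is repeatable: both calls return the same variant.
print(assign_variant("user-123", "new-checkout-flow"))
print(assign_variant("user-123", "new-checkout-flow"))
```

A real experiment would still need the exposure logging and statistical analysis the note mentions, which is where dedicated services are more appropriate.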