- Over/Underfitting: Overfitting means high variance (fitting noise); underfitting means high bias (a model too simple for the signal).
- Tradeoff: Expected total error = Bias² + Variance + Irreducible Noise; minimizing it means balancing the first two terms.
- Regularization: L1 (Lasso) and L2 (Ridge) penalize large weights to discourage overly complex models; L1 also drives some weights to exactly zero (implicit feature selection).
- Cross-Validation: K-Fold averages performance over k held-out folds for stable estimates.
- Leave-One-Out (LOO): The special case k = N, reserved for extremely small datasets.
- Stratification: Maintains representative class/property distributions.
- Sensitivity Analysis: Quantifying output change vs. input perturbation.
- Process Windows: Finding optimal, stable regions insensitive to industrial noise.
- Robust Design: Preferring a flat, stable optimum over a sharp performance peak that degrades under small perturbations.
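The K-Fold and Ridge ideas above can be sketched in a few lines of NumPy; the synthetic dataset, fold count, and penalty λ below are all illustrative:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge solution: w = (X^T X + lam * I)^{-1} X^T y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def kfold_mse(X, y, k=5, lam=0.1, seed=0):
    # Shuffle indices, split into k folds, and average held-out MSE
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        w = ridge_fit(X[train], y[train], lam)
        errors.append(np.mean((X[test] @ w - y[test]) ** 2))
    return np.mean(errors)

# Synthetic linear data (illustrative)
rng = np.random.default_rng(42)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
print(kfold_mse(X, y, k=5, lam=0.1))
```

Averaging over folds is what makes the estimate stable: any single train/test split can be unlucky, but the mean over k disjoint held-out sets is far less sensitive to how the data happened to be partitioned.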
- High accuracy is insufficient for deployment.
- Defining "Model Trust".
- U-shaped error curve.
- Visualizing overfitting.
- Double Descent.
- K-Fold, Stratified, Grouped CV.
- Nested CV.
- Reliability metrics.
- L2 (Ridge) and L1 (Lasso).
- Early Stopping.
- Mapping safe processing zones.
- Sensitivity analysis applications.
- Casting/3D Printing stability.
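The process-window and robust-design points above can be illustrated with a toy Monte Carlo sensitivity analysis; both response curves and the noise level are hypothetical stand-ins for real process data:

```python
import numpy as np

def sharp_peak(x):
    # Hypothetical process yield: high nominal value, very narrow optimum
    return 1.00 * np.exp(-((x - 0.50) / 0.02) ** 2)

def flat_plateau(x):
    # Hypothetical alternative: slightly lower yield, wide stable window
    return 0.90 * np.exp(-((x - 0.50) / 0.20) ** 2)

def sensitivity(f, x0, noise=0.03, n=10_000, seed=0):
    # Monte Carlo estimate of output spread under input perturbation
    rng = np.random.default_rng(seed)
    return np.std(f(x0 + noise * rng.normal(size=n)))

s_sharp = sensitivity(sharp_peak, 0.5)
s_flat = sensitivity(flat_plateau, 0.5)
```

Under the same input noise, the flat plateau's output barely moves while the sharp peak's output swings widely; robust design accepts the slightly lower nominal yield in exchange for that stability.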
Summary for ML-PC Week 8:
- Shifts focus from raw predictive performance to model reliability.
- Explores Bias-Variance tradeoff and generalization.
- Details robust validation (K-Fold, Stratified CV).
- Uses sensitivity analysis to map stable Process Windows.
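The bias-variance decomposition in the summary can be checked numerically by refitting a model on many noisy resamples of the same target; the target function, noise level, and polynomial degrees below are illustrative:

```python
import numpy as np

# Odd target on [-1, 1]; a line underfits it, a high-degree polynomial tracks it
def true_f(x):
    return np.sin(np.pi * x)

x = np.linspace(-1, 1, 30)
x_test = 0.5  # point at which bias^2 and variance are estimated
rng = np.random.default_rng(0)

def decompose(degree, trials=500, noise=0.3):
    # Refit on fresh noisy samples; spread of predictions = variance,
    # squared gap between mean prediction and truth = bias^2
    preds = []
    for _ in range(trials):
        y = true_f(x) + noise * rng.normal(size=len(x))
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, x_test))
    preds = np.array(preds)
    bias_sq = (preds.mean() - true_f(x_test)) ** 2
    return bias_sq, preds.var()

b_lo, v_lo = decompose(degree=1)    # too simple: high bias, low variance
b_hi, v_hi = decompose(degree=10)   # very flexible: low bias, high variance
```

The degree-1 fit misses the curve in the same way on every resample (bias), while the degree-10 fit chases each sample's noise (variance); total expected error adds both terms on top of the irreducible noise.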