- Stereology: Quantifying 3D structure from 2D sections.
- Hand-crafted Descriptors: Intuitive but biased by the expert's choice of what to measure (e.g., aspect ratio; see the metrics sketch after this list).
- Information Bottleneck: Compressing a micrograph to a few scalar metrics discards subtle morphological details.
- Artificial Neuron: Weighted sum plus non-linear activation, y = σ(w·x + b); see the MLP sketch after this list.
- Activations: ReLU avoids saturation and makes deep networks easier to train; Sigmoid/Tanh are smooth but saturate for large inputs.
- Universal Approximation: One hidden layer with enough units can approximate any continuous function on a compact domain to arbitrary accuracy (see the formula after this list).
- MLP Topology: Stacked layers build progressively more abstract representations.
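
To make the classical baseline concrete, here is a minimal metrics sketch: two hand-crafted descriptors computed from a segmented 2D section. The binary phase mask and the elliptical "grain" are synthetic placeholders (array names and sizes are illustrative, not from the lecture); only NumPy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical segmented 2D section: 1 = second phase, 0 = matrix.
phase_mask = (rng.random((256, 256)) > 0.7).astype(np.uint8)

# Phase (area) fraction: in stereology, the 2D area fraction estimates
# the 3D volume fraction.
phase_fraction = phase_mask.mean()

# Aspect ratio of a single grain (here a synthetic elliptical blob),
# measured from its bounding box.
yy, xx = np.mgrid[0:256, 0:256]
grain_mask = (((xx - 128) / 60.0) ** 2 + ((yy - 128) / 20.0) ** 2) <= 1.0

rows, cols = np.where(grain_mask)
height = rows.max() - rows.min() + 1
width = cols.max() - cols.min() + 1
aspect_ratio = max(height, width) / min(height, width)

print(f"phase fraction ~ {phase_fraction:.3f}")
print(f"grain aspect ratio ~ {aspect_ratio:.2f}")
```

Both outputs are single scalars; everything else about the morphology (shape detail, connectivity, spatial arrangement) is thrown away, which is the information bottleneck described above.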
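The MLP sketch below shows the artificial neuron and a two-hidden-layer forward pass, assuming NumPy; the layer sizes, random weights, and function names are illustrative, not the lecture's notation.

```python
import numpy as np

def relu(z):
    # ReLU: max(0, z), applied element-wise; does not saturate for z > 0.
    return np.maximum(0.0, z)

def sigmoid(z):
    # Sigmoid squashes values into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def neuron(x, w, b):
    # Single artificial neuron: weighted sum plus non-linear activation.
    return relu(np.dot(w, x) + b)

def mlp_forward(x, params):
    # Forward propagation through stacked layers: each hidden layer is an
    # affine map followed by ReLU; the output layer uses a sigmoid.
    h = x
    for W, b in params[:-1]:
        h = relu(W @ h + b)
    W_out, b_out = params[-1]
    return sigmoid(W_out @ h + b_out)

rng = np.random.default_rng(0)
x = rng.normal(size=4)  # e.g. four input features describing one grain
params = [
    (0.5 * rng.normal(size=(8, 4)), np.zeros(8)),  # hidden layer 1
    (0.5 * rng.normal(size=(8, 8)), np.zeros(8)),  # hidden layer 2
    (0.5 * rng.normal(size=(1, 8)), np.zeros(1)),  # output layer
]

print("single neuron output:", neuron(x, rng.normal(size=4), 0.0))
print("MLP output:", mlp_forward(x, params))
```

In practice the weights and biases are learned from data, which is what lets the network discover its own descriptors rather than relying on hand-crafted ones.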
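The universal-approximation bullet corresponds to the standard one-hidden-layer form: with enough hidden units N and a non-polynomial activation σ, a sum of scaled, shifted activations can match any continuous f on a compact set K to within any tolerance ε. This is the textbook statement, not a formula from the notes.

```latex
\hat{f}(x) = \sum_{i=1}^{N} a_i \,\sigma\!\left(w_i^{\top} x + b_i\right),
\qquad
\sup_{x \in K} \bigl| f(x) - \hat{f}(x) \bigr| < \varepsilon
```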
- Standard metrics: Grain size, phase fractions.
- Quantitative Metallography.
- Limitations of hand-crafted features.
- Concept of Representation.
- Incomplete expert features.
- Shift to learned embeddings.
- Mathematical Neuron.
- Weights and Biases.
- Non-linear activations.
- Forward propagation.
- Stacking layers for abstraction.
- Hidden layers.
- Universal approximators.
- Moving to images (CNNs).
- Trusting learned vs. classical metrics.
Summary for ML-PC Week 4:
- Transitions from classical stereology to learned representations.
- Reviews limits of hand-crafted microstructure metrics.
- Introduces the artificial neuron, weights, and activations.
- Builds the framework for Multi-Layer Perceptrons (MLPs) to automate feature extraction.